00:00:00.001 Started by upstream project "autotest-per-patch" build number 122820 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.009 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.010 The recommended git tool is: git 00:00:00.010 using credential 00000000-0000-0000-0000-000000000002 00:00:00.012 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.028 Fetching changes from the remote Git repository 00:00:00.029 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.045 Using shallow fetch with depth 1 00:00:00.045 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.045 > git --version # timeout=10 00:00:00.066 > git --version # 'git version 2.39.2' 00:00:00.066 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.070 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.070 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.246 ERROR: Error fetching remote repo 'origin' 00:00:07.246 hudson.plugins.git.GitException: Failed to fetch from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:07.246 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:999) 00:00:07.246 at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1241) 00:00:07.246 at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1305) 00:00:07.246 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:129) 00:00:07.246 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:165) 00:00:07.246 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:71) 00:00:07.246 at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:311) 00:00:07.246 at hudson.model.ResourceController.execute(ResourceController.java:101) 00:00:07.246 at hudson.model.Executor.run(Executor.java:442) 00:00:07.246 Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master" returned status code 128: 00:00:07.246 stdout: 00:00:07.246 stderr: fatal: unable to access 'https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool/': server certificate verification failed. CAfile: none CRLfile: none 00:00:07.246 00:00:07.246 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2842) 00:00:07.246 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:2185) 00:00:07.246 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:635) 00:00:07.246 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:997) 00:00:07.246 ... 
8 more 00:00:07.246 ERROR: Error fetching remote repo 'origin' 00:00:07.246 Retrying after 10 seconds 00:00:17.247 The recommended git tool is: git 00:00:17.247 using credential 00000000-0000-0000-0000-000000000002 00:00:17.248 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:17.271 Fetching changes from the remote Git repository 00:00:17.273 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:17.306 Using shallow fetch with depth 1 00:00:17.306 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:17.306 > git --version # timeout=10 00:00:17.321 > git --version # 'git version 2.39.2' 00:00:17.321 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:17.322 Setting http proxy: proxy-dmz.intel.com:911 00:00:17.322 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:17.412 ERROR: Error fetching remote repo 'origin' 00:00:17.412 hudson.plugins.git.GitException: Failed to fetch from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:17.412 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:999) 00:00:17.412 at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1241) 00:00:17.412 at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1305) 00:00:17.412 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:129) 00:00:17.412 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:165) 00:00:17.412 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:71) 00:00:17.412 at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:311) 00:00:17.412 at hudson.model.ResourceController.execute(ResourceController.java:101) 00:00:17.412 at hudson.model.Executor.run(Executor.java:442) 00:00:17.412 Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master" returned status code 128: 00:00:17.412 stdout: 00:00:17.412 stderr: fatal: unable to access 'https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool/': server certificate verification failed. CAfile: none CRLfile: none 00:00:17.412 00:00:17.412 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2842) 00:00:17.412 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:2185) 00:00:17.412 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:635) 00:00:17.412 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:997) 00:00:17.412 ... 
8 more 00:00:17.412 ERROR: Error fetching remote repo 'origin' 00:00:17.412 Retrying after 10 seconds 00:00:27.413 The recommended git tool is: git 00:00:27.413 using credential 00000000-0000-0000-0000-000000000002 00:00:27.415 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:27.426 Fetching changes from the remote Git repository 00:00:27.428 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:27.440 Using shallow fetch with depth 1 00:00:27.440 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:27.440 > git --version # timeout=10 00:00:27.453 > git --version # 'git version 2.39.2' 00:00:27.453 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:27.454 Setting http proxy: proxy-dmz.intel.com:911 00:00:27.454 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:02:16.435 ERROR: Error fetching remote repo 'origin' 00:02:16.435 hudson.plugins.git.GitException: Failed to fetch from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:02:16.435 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:999) 00:02:16.435 at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1241) 00:02:16.435 at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1305) 00:02:16.435 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:129) 00:02:16.435 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:165) 00:02:16.435 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:71) 00:02:16.435 at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:311) 00:02:16.435 at hudson.model.ResourceController.execute(ResourceController.java:101) 00:02:16.435 at hudson.model.Executor.run(Executor.java:442) 00:02:16.436 Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master" returned status code 128: 00:02:16.436 stdout: 00:02:16.436 stderr: fatal: unable to access 'https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool/': CONNECT tunnel failed, response 500 00:02:16.436 00:02:16.436 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2842) 00:02:16.436 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:2185) 00:02:16.436 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:635) 00:02:16.436 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:997) 00:02:16.436 ... 
8 more 00:02:16.436 ERROR: Error fetching remote repo 'origin' 00:02:16.436 Retrying after 10 seconds 00:02:26.437 The recommended git tool is: git 00:02:26.437 using credential 00000000-0000-0000-0000-000000000002 00:02:26.439 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:02:26.451 Fetching changes from the remote Git repository 00:02:26.453 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:02:26.499 Using shallow fetch with depth 1 00:02:26.499 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:02:26.499 > git --version # timeout=10 00:02:26.520 > git --version # 'git version 2.39.2' 00:02:26.520 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:02:26.525 Setting http proxy: proxy-dmz.intel.com:911 00:02:26.525 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:02:30.105 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:02:30.116 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:02:30.127 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:02:30.127 > git config core.sparsecheckout # timeout=10 00:02:30.137 > git read-tree -mu HEAD # timeout=10 00:02:30.152 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:02:30.172 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:02:30.172 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:02:30.275 [Pipeline] Start of Pipeline 00:02:30.289 [Pipeline] library 00:02:30.291 Loading library shm_lib@master 00:02:30.291 Library shm_lib@master is cached. Copying from home. 00:02:30.309 [Pipeline] node 00:02:30.314 Running on FCP03 in /var/jenkins/workspace/dsa-phy-autotest 00:02:30.319 [Pipeline] { 00:02:30.332 [Pipeline] catchError 00:02:30.334 [Pipeline] { 00:02:30.345 [Pipeline] wrap 00:02:30.353 [Pipeline] { 00:02:30.361 [Pipeline] stage 00:02:30.362 [Pipeline] { (Prologue) 00:02:30.534 [Pipeline] sh 00:02:30.814 + logger -p user.info -t JENKINS-CI 00:02:30.834 [Pipeline] echo 00:02:30.835 Node: FCP03 00:02:30.843 [Pipeline] sh 00:02:31.140 [Pipeline] setCustomBuildProperty 00:02:31.152 [Pipeline] echo 00:02:31.154 Cleanup processes 00:02:31.159 [Pipeline] sh 00:02:31.442 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:31.442 3161354 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:31.455 [Pipeline] sh 00:02:31.738 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:31.738 ++ grep -v 'sudo pgrep' 00:02:31.738 ++ awk '{print $1}' 00:02:31.738 + sudo kill -9 00:02:31.738 + true 00:02:31.755 [Pipeline] cleanWs 00:02:31.766 [WS-CLEANUP] Deleting project workspace... 00:02:31.766 [WS-CLEANUP] Deferred wipeout is used... 
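The first two fetch attempts above fail TLS verification against the Gerrit mirror ("server certificate verification failed. CAfile: none"), the third is rejected by the proxy ("CONNECT tunnel failed, response 500"), and only the fourth attempt succeeds. A minimal sketch for triaging this kind of intermittent failure from the build node, using the proxy and repository URL shown in the log; the CA-bundle path is only an example of where a distro bundle typically lives, not something taken from the log:

  # Does the proxy tunnel to the mirror at all, and whose certificate comes back?
  curl -Iv --proxy http://proxy-dmz.intel.com:911 \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool/

  # If verification keeps failing, point git at a CA bundle that contains the
  # issuing certificate (path below is illustrative).
  git config --global http.sslCAInfo /etc/pki/tls/certs/ca-bundle.crt

  # Re-run the exact fetch the git plugin issues, to confirm the fix.
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master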
00:02:31.772 [WS-CLEANUP] done 00:02:31.776 [Pipeline] setCustomBuildProperty 00:02:31.791 [Pipeline] sh 00:02:32.070 + sudo git config --global --replace-all safe.directory '*' 00:02:32.147 [Pipeline] nodesByLabel 00:02:32.148 Found a total of 1 nodes with the 'sorcerer' label 00:02:32.158 [Pipeline] httpRequest 00:02:32.163 HttpMethod: GET 00:02:32.163 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:02:32.167 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:02:32.170 Response Code: HTTP/1.1 200 OK 00:02:32.171 Success: Status code 200 is in the accepted range: 200,404 00:02:32.171 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:02:32.981 [Pipeline] sh 00:02:33.265 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:02:33.283 [Pipeline] httpRequest 00:02:33.288 HttpMethod: GET 00:02:33.288 URL: http://10.211.164.101/packages/spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz 00:02:33.289 Sending request to url: http://10.211.164.101/packages/spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz 00:02:33.302 Response Code: HTTP/1.1 200 OK 00:02:33.302 Success: Status code 200 is in the accepted range: 200,404 00:02:33.303 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz 00:02:57.359 [Pipeline] sh 00:02:57.642 + tar --no-same-owner -xf spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz 00:03:00.183 [Pipeline] sh 00:03:00.465 + git -C spdk log --oneline -n5 00:03:00.465 c06b0c79b nvmf: make allow_any_host its own byte 00:03:00.465 297733650 nvmf: don't touch subsystem->flags.allow_any_host directly 00:03:00.465 35948d8fa build: rename SPDK_MOCK_SYSCALLS -> SPDK_MOCK_SYMBOLS 00:03:00.465 69872294e nvme: make spdk_nvme_dhchap_get_digest_length() public 00:03:00.465 67ab645cd nvmf/auth: send AUTH_failure1 message 00:03:00.476 [Pipeline] } 00:03:00.492 [Pipeline] // stage 00:03:00.500 [Pipeline] stage 00:03:00.502 [Pipeline] { (Prepare) 00:03:00.517 [Pipeline] writeFile 00:03:00.533 [Pipeline] sh 00:03:00.847 + logger -p user.info -t JENKINS-CI 00:03:00.859 [Pipeline] sh 00:03:01.141 + logger -p user.info -t JENKINS-CI 00:03:01.152 [Pipeline] sh 00:03:01.428 + cat autorun-spdk.conf 00:03:01.428 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:01.428 SPDK_TEST_ACCEL_DSA=1 00:03:01.428 SPDK_TEST_ACCEL_IAA=1 00:03:01.428 SPDK_TEST_NVMF=1 00:03:01.428 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:01.428 SPDK_RUN_ASAN=1 00:03:01.428 SPDK_RUN_UBSAN=1 00:03:01.433 RUN_NIGHTLY=0 00:03:01.439 [Pipeline] readFile 00:03:01.459 [Pipeline] withEnv 00:03:01.461 [Pipeline] { 00:03:01.475 [Pipeline] sh 00:03:01.758 + set -ex 00:03:01.758 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]] 00:03:01.758 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:03:01.758 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:01.758 ++ SPDK_TEST_ACCEL_DSA=1 00:03:01.758 ++ SPDK_TEST_ACCEL_IAA=1 00:03:01.758 ++ SPDK_TEST_NVMF=1 00:03:01.758 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:01.758 ++ SPDK_RUN_ASAN=1 00:03:01.758 ++ SPDK_RUN_UBSAN=1 00:03:01.758 ++ RUN_NIGHTLY=0 00:03:01.758 + case $SPDK_TEST_NVMF_NICS in 00:03:01.758 + DRIVERS= 00:03:01.758 + [[ -n '' ]] 00:03:01.758 + exit 0 00:03:01.766 [Pipeline] } 00:03:01.784 [Pipeline] // withEnv 00:03:01.789 [Pipeline] } 00:03:01.806 [Pipeline] // stage 00:03:01.814 [Pipeline] catchError 00:03:01.816 [Pipeline] { 
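Rather than cloning over the flaky proxy a second time, the pipeline pulls pre-packaged tarballs of the jbp scripts and the SPDK tree, keyed by the commit SHAs checked out above, from an internal package cache and unpacks them into the workspace. A minimal sketch of the same two steps outside Jenkins, assuming the cache at 10.211.164.101 is reachable from the node:

  # Fetch the packaged checkouts by commit SHA from the internal cache.
  curl -fO http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
  curl -fO http://10.211.164.101/packages/spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz

  # Extract as the invoking user instead of the UIDs recorded in the archive,
  # matching the pipeline's tar invocation.
  tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
  tar --no-same-owner -xf spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz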
00:03:01.830 [Pipeline] timeout
00:03:01.830 Timeout set to expire in 50 min
00:03:01.831 [Pipeline] {
00:03:01.845 [Pipeline] stage
00:03:01.847 [Pipeline] { (Tests)
00:03:01.862 [Pipeline] sh
00:03:02.142 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest
00:03:02.142 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest
00:03:02.142 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest
00:03:02.142 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]]
00:03:02.142 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
00:03:02.142 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output
00:03:02.142 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]]
00:03:02.142 + [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:03:02.142 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output
00:03:02.142 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:03:02.142 + cd /var/jenkins/workspace/dsa-phy-autotest
00:03:02.142 + source /etc/os-release
00:03:02.142 ++ NAME='Fedora Linux'
00:03:02.142 ++ VERSION='38 (Cloud Edition)'
00:03:02.142 ++ ID=fedora
00:03:02.142 ++ VERSION_ID=38
00:03:02.142 ++ VERSION_CODENAME=
00:03:02.142 ++ PLATFORM_ID=platform:f38
00:03:02.142 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:03:02.142 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:02.142 ++ LOGO=fedora-logo-icon
00:03:02.142 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:03:02.142 ++ HOME_URL=https://fedoraproject.org/
00:03:02.142 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:03:02.142 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:02.142 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:02.142 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:02.142 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:03:02.142 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:02.142 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:03:02.142 ++ SUPPORT_END=2024-05-14
00:03:02.142 ++ VARIANT='Cloud Edition'
00:03:02.142 ++ VARIANT_ID=cloud
00:03:02.142 + uname -a
00:03:02.142 Linux spdk-fcp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:03:02.142 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status
00:03:04.668 Hugepages
00:03:04.668 node   hugesize    free /  total
00:03:04.668 node0  1048576kB   0    /  0
00:03:04.668 node0  2048kB      0    /  0
00:03:04.668 node1  1048576kB   0    /  0
00:03:04.668 node1  2048kB      0    /  0
00:03:04.668
00:03:04.668 Type  BDF           Vendor  Device  NUMA  Driver  Device  Block devices
00:03:04.668 NVMe  0000:03:00.0  1344    51c3    0     nvme    nvme1   nvme1n1
00:03:04.668 DSA   0000:6a:01.0  8086    0b25    0     idxd    -       -
00:03:04.668 IAA   0000:6a:02.0  8086    0cfe    0     idxd    -       -
00:03:04.668 DSA   0000:6f:01.0  8086    0b25    0     idxd    -       -
00:03:04.668 IAA   0000:6f:02.0  8086    0cfe    0     idxd    -       -
00:03:04.668 DSA   0000:74:01.0  8086    0b25    0     idxd    -       -
00:03:04.668 IAA   0000:74:02.0  8086    0cfe    0     idxd    -       -
00:03:04.668 DSA   0000:79:01.0  8086    0b25    0     idxd    -       -
00:03:04.668 IAA   0000:79:02.0  8086    0cfe    0     idxd    -       -
00:03:04.668 NVMe  0000:c9:00.0  144d    a80a    1     nvme    nvme0   nvme0n1
00:03:04.668 DSA   0000:e7:01.0  8086    0b25    1     idxd    -       -
00:03:04.668 IAA   0000:e7:02.0  8086    0cfe    1     idxd    -       -
00:03:04.668 DSA   0000:ec:01.0  8086    0b25    1     idxd    -       -
00:03:04.668 IAA   0000:ec:02.0  8086    0cfe    1     idxd    -       -
00:03:04.668 DSA   0000:f1:01.0  8086    0b25    1     idxd    -       -
00:03:04.668 IAA   0000:f1:02.0  8086    0cfe    1     idxd    -       -
00:03:04.668 DSA   0000:f6:01.0  8086    0b25    1     idxd    -       -
00:03:04.668 IAA   0000:f6:02.0  8086    0cfe    1     idxd    -       -
00:03:04.668 + rm -f /tmp/spdk-ld-path
00:03:04.668 + source autorun-spdk.conf
00:03:04.668 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:04.668 ++ SPDK_TEST_ACCEL_DSA=1 00:03:04.668 ++ SPDK_TEST_ACCEL_IAA=1 00:03:04.668 ++ SPDK_TEST_NVMF=1 00:03:04.668 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:04.669 ++ SPDK_RUN_ASAN=1 00:03:04.669 ++ SPDK_RUN_UBSAN=1 00:03:04.669 ++ RUN_NIGHTLY=0 00:03:04.669 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:04.669 + [[ -n '' ]] 00:03:04.669 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk 00:03:04.669 + for M in /var/spdk/build-*-manifest.txt 00:03:04.669 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:04.669 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:03:04.669 + for M in /var/spdk/build-*-manifest.txt 00:03:04.669 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:04.669 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:03:04.669 ++ uname 00:03:04.669 + [[ Linux == \L\i\n\u\x ]] 00:03:04.669 + sudo dmesg -T 00:03:04.669 + sudo dmesg --clear 00:03:04.669 + dmesg_pid=3162387 00:03:04.669 + [[ Fedora Linux == FreeBSD ]] 00:03:04.669 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:04.669 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:04.669 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:04.669 + [[ -x /usr/src/fio-static/fio ]] 00:03:04.669 + export FIO_BIN=/usr/src/fio-static/fio 00:03:04.669 + FIO_BIN=/usr/src/fio-static/fio 00:03:04.669 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:04.669 + sudo dmesg -Tw 00:03:04.669 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:04.669 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:04.669 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:04.669 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:04.669 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:04.669 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:04.669 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:04.669 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:03:04.669 Test configuration: 00:03:04.669 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:04.669 SPDK_TEST_ACCEL_DSA=1 00:03:04.669 SPDK_TEST_ACCEL_IAA=1 00:03:04.669 SPDK_TEST_NVMF=1 00:03:04.669 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:04.669 SPDK_RUN_ASAN=1 00:03:04.669 SPDK_RUN_UBSAN=1 00:03:04.669 RUN_NIGHTLY=0 00:39:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:03:04.669 00:39:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:04.669 00:39:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:04.669 00:39:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:04.669 00:39:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.669 00:39:51 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.669 00:39:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.669 00:39:51 -- paths/export.sh@5 -- $ export PATH 00:03:04.669 00:39:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.669 00:39:51 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:03:04.669 00:39:51 -- common/autobuild_common.sh@437 -- $ date +%s 00:03:04.669 00:39:51 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715726391.XXXXXX 00:03:04.669 00:39:51 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715726391.xdgwub 00:03:04.669 00:39:51 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:03:04.669 00:39:51 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:03:04.669 00:39:51 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:03:04.669 00:39:51 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:04.669 00:39:51 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:04.669 00:39:51 -- common/autobuild_common.sh@453 -- $ get_config_params 00:03:04.669 00:39:51 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:03:04.669 00:39:51 -- common/autotest_common.sh@10 -- $ set +x 00:03:04.669 00:39:51 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:03:04.669 00:39:51 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:03:04.669 00:39:51 -- pm/common@17 -- $ local monitor 00:03:04.669 00:39:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.669 00:39:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.669 00:39:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.669 00:39:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.669 00:39:51 -- pm/common@25 -- $ sleep 1 00:03:04.669 00:39:51 -- pm/common@21 -- $ date +%s 00:03:04.669 00:39:51 -- pm/common@21 -- $ 
date +%s 00:03:04.669 00:39:51 -- pm/common@21 -- $ date +%s 00:03:04.669 00:39:51 -- pm/common@21 -- $ date +%s 00:03:04.669 00:39:51 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726391 00:03:04.669 00:39:51 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726391 00:03:04.669 00:39:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726391 00:03:04.669 00:39:51 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726391 00:03:04.669 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726391_collect-vmstat.pm.log 00:03:04.669 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726391_collect-cpu-load.pm.log 00:03:04.669 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726391_collect-cpu-temp.pm.log 00:03:04.927 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726391_collect-bmc-pm.bmc.pm.log 00:03:05.859 00:39:52 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:03:05.859 00:39:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:05.859 00:39:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:05.859 00:39:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:03:05.859 00:39:52 -- spdk/autobuild.sh@16 -- $ date -u 00:03:05.859 Tue May 14 10:39:52 PM UTC 2024 00:03:05.859 00:39:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:05.859 v24.05-pre-624-gc06b0c79b 00:03:05.859 00:39:52 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:05.859 00:39:52 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:05.859 00:39:52 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:05.859 00:39:52 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:05.859 00:39:52 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.859 ************************************ 00:03:05.859 START TEST asan 00:03:05.859 ************************************ 00:03:05.859 00:39:52 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:03:05.859 using asan 00:03:05.859 00:03:05.859 real 0m0.001s 00:03:05.859 user 0m0.001s 00:03:05.859 sys 0m0.000s 00:03:05.859 00:39:52 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:05.859 00:39:52 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:05.859 ************************************ 00:03:05.859 END TEST asan 00:03:05.859 ************************************ 00:03:05.859 00:39:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:05.859 00:39:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:05.859 00:39:52 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:05.859 00:39:52 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:05.859 00:39:52 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.859 ************************************ 00:03:05.859 START TEST 
ubsan 00:03:05.859 ************************************ 00:03:05.859 00:39:52 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:03:05.859 using ubsan 00:03:05.859 00:03:05.859 real 0m0.000s 00:03:05.859 user 0m0.000s 00:03:05.859 sys 0m0.000s 00:03:05.859 00:39:52 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:05.860 00:39:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:05.860 ************************************ 00:03:05.860 END TEST ubsan 00:03:05.860 ************************************ 00:03:05.860 00:39:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:05.860 00:39:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:05.860 00:39:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:05.860 00:39:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:05.860 00:39:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:05.860 00:39:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:05.860 00:39:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:05.860 00:39:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:05.860 00:39:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:03:05.860 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:03:05.860 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:03:06.426 Using 'verbs' RDMA provider 00:03:19.177 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:29.143 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:29.143 Creating mk/config.mk...done. 00:03:29.143 Creating mk/cc.flags.mk...done. 00:03:29.143 Type 'make' to build. 00:03:29.143 00:40:15 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:03:29.143 00:40:15 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:29.143 00:40:15 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:29.143 00:40:15 -- common/autotest_common.sh@10 -- $ set +x 00:03:29.143 ************************************ 00:03:29.143 START TEST make 00:03:29.143 ************************************ 00:03:29.143 00:40:15 make -- common/autotest_common.sh@1121 -- $ make -j128 00:03:29.143 make[1]: Nothing to be done for 'all'. 
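At this point the autobuild stage has configured SPDK with the parameters recorded above and handed control to make. A minimal sketch of reproducing the same configuration outside the CI wrapper, assuming an SPDK checkout with its submodules initialized; the -j value simply mirrors the run_test make invocation above:

  cd /var/jenkins/workspace/dsa-phy-autotest/spdk
  git submodule update --init        # DPDK and ISA-L are built from submodules below
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
  make -j128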
00:03:34.423 The Meson build system 00:03:34.423 Version: 1.3.1 00:03:34.423 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk 00:03:34.423 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp 00:03:34.423 Build type: native build 00:03:34.423 Program cat found: YES (/usr/bin/cat) 00:03:34.423 Project name: DPDK 00:03:34.423 Project version: 23.11.0 00:03:34.423 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:34.423 C linker for the host machine: cc ld.bfd 2.39-16 00:03:34.423 Host machine cpu family: x86_64 00:03:34.423 Host machine cpu: x86_64 00:03:34.423 Message: ## Building in Developer Mode ## 00:03:34.423 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:34.423 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:34.423 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:34.423 Program python3 found: YES (/usr/bin/python3) 00:03:34.423 Program cat found: YES (/usr/bin/cat) 00:03:34.423 Compiler for C supports arguments -march=native: YES 00:03:34.423 Checking for size of "void *" : 8 00:03:34.423 Checking for size of "void *" : 8 (cached) 00:03:34.423 Library m found: YES 00:03:34.423 Library numa found: YES 00:03:34.423 Has header "numaif.h" : YES 00:03:34.423 Library fdt found: NO 00:03:34.423 Library execinfo found: NO 00:03:34.423 Has header "execinfo.h" : YES 00:03:34.423 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:34.423 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:34.423 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:34.423 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:34.423 Run-time dependency openssl found: YES 3.0.9 00:03:34.423 Run-time dependency libpcap found: YES 1.10.4 00:03:34.423 Has header "pcap.h" with dependency libpcap: YES 00:03:34.423 Compiler for C supports arguments -Wcast-qual: YES 00:03:34.423 Compiler for C supports arguments -Wdeprecated: YES 00:03:34.423 Compiler for C supports arguments -Wformat: YES 00:03:34.423 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:34.423 Compiler for C supports arguments -Wformat-security: NO 00:03:34.423 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:34.423 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:34.423 Compiler for C supports arguments -Wnested-externs: YES 00:03:34.423 Compiler for C supports arguments -Wold-style-definition: YES 00:03:34.423 Compiler for C supports arguments -Wpointer-arith: YES 00:03:34.423 Compiler for C supports arguments -Wsign-compare: YES 00:03:34.423 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:34.423 Compiler for C supports arguments -Wundef: YES 00:03:34.423 Compiler for C supports arguments -Wwrite-strings: YES 00:03:34.424 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:34.424 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:34.424 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:34.424 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:34.424 Program objdump found: YES (/usr/bin/objdump) 00:03:34.424 Compiler for C supports arguments -mavx512f: YES 00:03:34.424 Checking if "AVX512 checking" compiles: YES 00:03:34.424 Fetching value of define "__SSE4_2__" : 1 00:03:34.424 Fetching value of define "__AES__" : 1 
00:03:34.424 Fetching value of define "__AVX__" : 1 00:03:34.424 Fetching value of define "__AVX2__" : 1 00:03:34.424 Fetching value of define "__AVX512BW__" : 1 00:03:34.424 Fetching value of define "__AVX512CD__" : 1 00:03:34.424 Fetching value of define "__AVX512DQ__" : 1 00:03:34.424 Fetching value of define "__AVX512F__" : 1 00:03:34.424 Fetching value of define "__AVX512VL__" : 1 00:03:34.424 Fetching value of define "__PCLMUL__" : 1 00:03:34.424 Fetching value of define "__RDRND__" : 1 00:03:34.424 Fetching value of define "__RDSEED__" : 1 00:03:34.424 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:34.424 Fetching value of define "__znver1__" : (undefined) 00:03:34.424 Fetching value of define "__znver2__" : (undefined) 00:03:34.424 Fetching value of define "__znver3__" : (undefined) 00:03:34.424 Fetching value of define "__znver4__" : (undefined) 00:03:34.424 Library asan found: YES 00:03:34.424 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:34.424 Message: lib/log: Defining dependency "log" 00:03:34.424 Message: lib/kvargs: Defining dependency "kvargs" 00:03:34.424 Message: lib/telemetry: Defining dependency "telemetry" 00:03:34.424 Library rt found: YES 00:03:34.424 Checking for function "getentropy" : NO 00:03:34.424 Message: lib/eal: Defining dependency "eal" 00:03:34.424 Message: lib/ring: Defining dependency "ring" 00:03:34.424 Message: lib/rcu: Defining dependency "rcu" 00:03:34.424 Message: lib/mempool: Defining dependency "mempool" 00:03:34.424 Message: lib/mbuf: Defining dependency "mbuf" 00:03:34.424 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:34.424 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:34.424 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:34.424 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:34.424 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:34.424 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:34.424 Compiler for C supports arguments -mpclmul: YES 00:03:34.424 Compiler for C supports arguments -maes: YES 00:03:34.424 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:34.424 Compiler for C supports arguments -mavx512bw: YES 00:03:34.424 Compiler for C supports arguments -mavx512dq: YES 00:03:34.424 Compiler for C supports arguments -mavx512vl: YES 00:03:34.424 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:34.424 Compiler for C supports arguments -mavx2: YES 00:03:34.424 Compiler for C supports arguments -mavx: YES 00:03:34.424 Message: lib/net: Defining dependency "net" 00:03:34.424 Message: lib/meter: Defining dependency "meter" 00:03:34.424 Message: lib/ethdev: Defining dependency "ethdev" 00:03:34.424 Message: lib/pci: Defining dependency "pci" 00:03:34.424 Message: lib/cmdline: Defining dependency "cmdline" 00:03:34.424 Message: lib/hash: Defining dependency "hash" 00:03:34.424 Message: lib/timer: Defining dependency "timer" 00:03:34.424 Message: lib/compressdev: Defining dependency "compressdev" 00:03:34.424 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:34.424 Message: lib/dmadev: Defining dependency "dmadev" 00:03:34.424 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:34.424 Message: lib/power: Defining dependency "power" 00:03:34.424 Message: lib/reorder: Defining dependency "reorder" 00:03:34.424 Message: lib/security: Defining dependency "security" 00:03:34.424 Has header "linux/userfaultfd.h" : YES 00:03:34.424 Has header "linux/vduse.h" : YES 00:03:34.424 Message: lib/vhost: Defining dependency 
"vhost" 00:03:34.424 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:34.424 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:34.424 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:34.424 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:34.424 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:34.424 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:34.424 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:34.424 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:34.424 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:34.424 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:34.424 Program doxygen found: YES (/usr/bin/doxygen) 00:03:34.424 Configuring doxy-api-html.conf using configuration 00:03:34.424 Configuring doxy-api-man.conf using configuration 00:03:34.424 Program mandb found: YES (/usr/bin/mandb) 00:03:34.424 Program sphinx-build found: NO 00:03:34.424 Configuring rte_build_config.h using configuration 00:03:34.424 Message: 00:03:34.424 ================= 00:03:34.424 Applications Enabled 00:03:34.424 ================= 00:03:34.424 00:03:34.424 apps: 00:03:34.424 00:03:34.424 00:03:34.424 Message: 00:03:34.424 ================= 00:03:34.424 Libraries Enabled 00:03:34.424 ================= 00:03:34.424 00:03:34.424 libs: 00:03:34.424 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:34.424 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:34.424 cryptodev, dmadev, power, reorder, security, vhost, 00:03:34.424 00:03:34.424 Message: 00:03:34.424 =============== 00:03:34.424 Drivers Enabled 00:03:34.424 =============== 00:03:34.424 00:03:34.424 common: 00:03:34.424 00:03:34.424 bus: 00:03:34.424 pci, vdev, 00:03:34.424 mempool: 00:03:34.424 ring, 00:03:34.424 dma: 00:03:34.424 00:03:34.424 net: 00:03:34.424 00:03:34.424 crypto: 00:03:34.424 00:03:34.424 compress: 00:03:34.424 00:03:34.424 vdpa: 00:03:34.424 00:03:34.424 00:03:34.424 Message: 00:03:34.424 ================= 00:03:34.424 Content Skipped 00:03:34.424 ================= 00:03:34.424 00:03:34.424 apps: 00:03:34.424 dumpcap: explicitly disabled via build config 00:03:34.424 graph: explicitly disabled via build config 00:03:34.424 pdump: explicitly disabled via build config 00:03:34.424 proc-info: explicitly disabled via build config 00:03:34.424 test-acl: explicitly disabled via build config 00:03:34.424 test-bbdev: explicitly disabled via build config 00:03:34.424 test-cmdline: explicitly disabled via build config 00:03:34.424 test-compress-perf: explicitly disabled via build config 00:03:34.424 test-crypto-perf: explicitly disabled via build config 00:03:34.424 test-dma-perf: explicitly disabled via build config 00:03:34.424 test-eventdev: explicitly disabled via build config 00:03:34.424 test-fib: explicitly disabled via build config 00:03:34.424 test-flow-perf: explicitly disabled via build config 00:03:34.424 test-gpudev: explicitly disabled via build config 00:03:34.424 test-mldev: explicitly disabled via build config 00:03:34.424 test-pipeline: explicitly disabled via build config 00:03:34.424 test-pmd: explicitly disabled via build config 00:03:34.424 test-regex: explicitly disabled via build config 00:03:34.424 test-sad: explicitly disabled via build config 00:03:34.424 test-security-perf: explicitly disabled via build config 00:03:34.424 
00:03:34.424 libs: 00:03:34.424 metrics: explicitly disabled via build config 00:03:34.424 acl: explicitly disabled via build config 00:03:34.424 bbdev: explicitly disabled via build config 00:03:34.424 bitratestats: explicitly disabled via build config 00:03:34.424 bpf: explicitly disabled via build config 00:03:34.424 cfgfile: explicitly disabled via build config 00:03:34.424 distributor: explicitly disabled via build config 00:03:34.424 efd: explicitly disabled via build config 00:03:34.424 eventdev: explicitly disabled via build config 00:03:34.424 dispatcher: explicitly disabled via build config 00:03:34.424 gpudev: explicitly disabled via build config 00:03:34.424 gro: explicitly disabled via build config 00:03:34.424 gso: explicitly disabled via build config 00:03:34.424 ip_frag: explicitly disabled via build config 00:03:34.424 jobstats: explicitly disabled via build config 00:03:34.424 latencystats: explicitly disabled via build config 00:03:34.424 lpm: explicitly disabled via build config 00:03:34.424 member: explicitly disabled via build config 00:03:34.424 pcapng: explicitly disabled via build config 00:03:34.424 rawdev: explicitly disabled via build config 00:03:34.424 regexdev: explicitly disabled via build config 00:03:34.424 mldev: explicitly disabled via build config 00:03:34.424 rib: explicitly disabled via build config 00:03:34.424 sched: explicitly disabled via build config 00:03:34.424 stack: explicitly disabled via build config 00:03:34.424 ipsec: explicitly disabled via build config 00:03:34.424 pdcp: explicitly disabled via build config 00:03:34.424 fib: explicitly disabled via build config 00:03:34.424 port: explicitly disabled via build config 00:03:34.424 pdump: explicitly disabled via build config 00:03:34.424 table: explicitly disabled via build config 00:03:34.424 pipeline: explicitly disabled via build config 00:03:34.424 graph: explicitly disabled via build config 00:03:34.424 node: explicitly disabled via build config 00:03:34.424 00:03:34.424 drivers: 00:03:34.424 common/cpt: not in enabled drivers build config 00:03:34.424 common/dpaax: not in enabled drivers build config 00:03:34.424 common/iavf: not in enabled drivers build config 00:03:34.424 common/idpf: not in enabled drivers build config 00:03:34.424 common/mvep: not in enabled drivers build config 00:03:34.424 common/octeontx: not in enabled drivers build config 00:03:34.424 bus/auxiliary: not in enabled drivers build config 00:03:34.424 bus/cdx: not in enabled drivers build config 00:03:34.424 bus/dpaa: not in enabled drivers build config 00:03:34.424 bus/fslmc: not in enabled drivers build config 00:03:34.424 bus/ifpga: not in enabled drivers build config 00:03:34.424 bus/platform: not in enabled drivers build config 00:03:34.424 bus/vmbus: not in enabled drivers build config 00:03:34.424 common/cnxk: not in enabled drivers build config 00:03:34.424 common/mlx5: not in enabled drivers build config 00:03:34.424 common/nfp: not in enabled drivers build config 00:03:34.424 common/qat: not in enabled drivers build config 00:03:34.424 common/sfc_efx: not in enabled drivers build config 00:03:34.424 mempool/bucket: not in enabled drivers build config 00:03:34.424 mempool/cnxk: not in enabled drivers build config 00:03:34.424 mempool/dpaa: not in enabled drivers build config 00:03:34.424 mempool/dpaa2: not in enabled drivers build config 00:03:34.424 mempool/octeontx: not in enabled drivers build config 00:03:34.424 mempool/stack: not in enabled drivers build config 00:03:34.425 dma/cnxk: not in enabled 
drivers build config 00:03:34.425 dma/dpaa: not in enabled drivers build config 00:03:34.425 dma/dpaa2: not in enabled drivers build config 00:03:34.425 dma/hisilicon: not in enabled drivers build config 00:03:34.425 dma/idxd: not in enabled drivers build config 00:03:34.425 dma/ioat: not in enabled drivers build config 00:03:34.425 dma/skeleton: not in enabled drivers build config 00:03:34.425 net/af_packet: not in enabled drivers build config 00:03:34.425 net/af_xdp: not in enabled drivers build config 00:03:34.425 net/ark: not in enabled drivers build config 00:03:34.425 net/atlantic: not in enabled drivers build config 00:03:34.425 net/avp: not in enabled drivers build config 00:03:34.425 net/axgbe: not in enabled drivers build config 00:03:34.425 net/bnx2x: not in enabled drivers build config 00:03:34.425 net/bnxt: not in enabled drivers build config 00:03:34.425 net/bonding: not in enabled drivers build config 00:03:34.425 net/cnxk: not in enabled drivers build config 00:03:34.425 net/cpfl: not in enabled drivers build config 00:03:34.425 net/cxgbe: not in enabled drivers build config 00:03:34.425 net/dpaa: not in enabled drivers build config 00:03:34.425 net/dpaa2: not in enabled drivers build config 00:03:34.425 net/e1000: not in enabled drivers build config 00:03:34.425 net/ena: not in enabled drivers build config 00:03:34.425 net/enetc: not in enabled drivers build config 00:03:34.425 net/enetfec: not in enabled drivers build config 00:03:34.425 net/enic: not in enabled drivers build config 00:03:34.425 net/failsafe: not in enabled drivers build config 00:03:34.425 net/fm10k: not in enabled drivers build config 00:03:34.425 net/gve: not in enabled drivers build config 00:03:34.425 net/hinic: not in enabled drivers build config 00:03:34.425 net/hns3: not in enabled drivers build config 00:03:34.425 net/i40e: not in enabled drivers build config 00:03:34.425 net/iavf: not in enabled drivers build config 00:03:34.425 net/ice: not in enabled drivers build config 00:03:34.425 net/idpf: not in enabled drivers build config 00:03:34.425 net/igc: not in enabled drivers build config 00:03:34.425 net/ionic: not in enabled drivers build config 00:03:34.425 net/ipn3ke: not in enabled drivers build config 00:03:34.425 net/ixgbe: not in enabled drivers build config 00:03:34.425 net/mana: not in enabled drivers build config 00:03:34.425 net/memif: not in enabled drivers build config 00:03:34.425 net/mlx4: not in enabled drivers build config 00:03:34.425 net/mlx5: not in enabled drivers build config 00:03:34.425 net/mvneta: not in enabled drivers build config 00:03:34.425 net/mvpp2: not in enabled drivers build config 00:03:34.425 net/netvsc: not in enabled drivers build config 00:03:34.425 net/nfb: not in enabled drivers build config 00:03:34.425 net/nfp: not in enabled drivers build config 00:03:34.425 net/ngbe: not in enabled drivers build config 00:03:34.425 net/null: not in enabled drivers build config 00:03:34.425 net/octeontx: not in enabled drivers build config 00:03:34.425 net/octeon_ep: not in enabled drivers build config 00:03:34.425 net/pcap: not in enabled drivers build config 00:03:34.425 net/pfe: not in enabled drivers build config 00:03:34.425 net/qede: not in enabled drivers build config 00:03:34.425 net/ring: not in enabled drivers build config 00:03:34.425 net/sfc: not in enabled drivers build config 00:03:34.425 net/softnic: not in enabled drivers build config 00:03:34.425 net/tap: not in enabled drivers build config 00:03:34.425 net/thunderx: not in enabled drivers build 
config 00:03:34.425 net/txgbe: not in enabled drivers build config 00:03:34.425 net/vdev_netvsc: not in enabled drivers build config 00:03:34.425 net/vhost: not in enabled drivers build config 00:03:34.425 net/virtio: not in enabled drivers build config 00:03:34.425 net/vmxnet3: not in enabled drivers build config 00:03:34.425 raw/*: missing internal dependency, "rawdev" 00:03:34.425 crypto/armv8: not in enabled drivers build config 00:03:34.425 crypto/bcmfs: not in enabled drivers build config 00:03:34.425 crypto/caam_jr: not in enabled drivers build config 00:03:34.425 crypto/ccp: not in enabled drivers build config 00:03:34.425 crypto/cnxk: not in enabled drivers build config 00:03:34.425 crypto/dpaa_sec: not in enabled drivers build config 00:03:34.425 crypto/dpaa2_sec: not in enabled drivers build config 00:03:34.425 crypto/ipsec_mb: not in enabled drivers build config 00:03:34.425 crypto/mlx5: not in enabled drivers build config 00:03:34.425 crypto/mvsam: not in enabled drivers build config 00:03:34.425 crypto/nitrox: not in enabled drivers build config 00:03:34.425 crypto/null: not in enabled drivers build config 00:03:34.425 crypto/octeontx: not in enabled drivers build config 00:03:34.425 crypto/openssl: not in enabled drivers build config 00:03:34.425 crypto/scheduler: not in enabled drivers build config 00:03:34.425 crypto/uadk: not in enabled drivers build config 00:03:34.425 crypto/virtio: not in enabled drivers build config 00:03:34.425 compress/isal: not in enabled drivers build config 00:03:34.425 compress/mlx5: not in enabled drivers build config 00:03:34.425 compress/octeontx: not in enabled drivers build config 00:03:34.425 compress/zlib: not in enabled drivers build config 00:03:34.425 regex/*: missing internal dependency, "regexdev" 00:03:34.425 ml/*: missing internal dependency, "mldev" 00:03:34.425 vdpa/ifc: not in enabled drivers build config 00:03:34.425 vdpa/mlx5: not in enabled drivers build config 00:03:34.425 vdpa/nfp: not in enabled drivers build config 00:03:34.425 vdpa/sfc: not in enabled drivers build config 00:03:34.425 event/*: missing internal dependency, "eventdev" 00:03:34.425 baseband/*: missing internal dependency, "bbdev" 00:03:34.425 gpu/*: missing internal dependency, "gpudev" 00:03:34.425 00:03:34.425 00:03:34.684 Build targets in project: 84 00:03:34.684 00:03:34.684 DPDK 23.11.0 00:03:34.684 00:03:34.684 User defined options 00:03:34.684 buildtype : debug 00:03:34.684 default_library : shared 00:03:34.684 libdir : lib 00:03:34.684 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:03:34.684 b_sanitize : address 00:03:34.684 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:34.684 c_link_args : 00:03:34.684 cpu_instruction_set: native 00:03:34.684 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:03:34.684 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:03:34.684 enable_docs : false 00:03:34.684 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:34.684 enable_kmods : false 00:03:34.684 tests : false 00:03:34.684 00:03:34.684 Found ninja-1.11.1.git.kitware.jobserver-1 
at /usr/local/bin/ninja 00:03:34.948 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp' 00:03:35.216 [1/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:35.216 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:35.216 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:35.216 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:35.216 [5/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:35.216 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:35.216 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:35.216 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:35.216 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:35.216 [10/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:35.216 [11/264] Linking static target lib/librte_kvargs.a 00:03:35.216 [12/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:35.216 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:35.216 [14/264] Linking static target lib/librte_log.a 00:03:35.216 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:35.216 [16/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:35.216 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:35.216 [18/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:35.481 [19/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:35.481 [20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:35.481 [21/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:35.481 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:35.481 [23/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:35.481 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:35.481 [25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:35.481 [26/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:35.481 [27/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:35.481 [28/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:35.481 [29/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:35.481 [30/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:35.481 [31/264] Linking static target lib/librte_pci.a 00:03:35.481 [32/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:35.481 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:35.481 [34/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:35.481 [35/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:35.481 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:35.481 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:35.481 [38/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:35.740 [39/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:35.740 [40/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:35.740 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:35.740 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:35.740 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:35.740 [44/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:35.740 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:35.740 [46/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:35.740 [47/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:35.740 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:35.740 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:35.740 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:35.740 [51/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:35.740 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:35.740 [53/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:35.740 [54/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:35.740 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:35.740 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:35.740 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:35.740 [58/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:35.740 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:35.740 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:35.740 [61/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.740 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:35.740 [63/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:35.740 [64/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:35.740 [65/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:35.740 [66/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:35.740 [67/264] Linking static target lib/librte_timer.a 00:03:35.740 [68/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:35.740 [69/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:35.740 [70/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:35.740 [71/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:35.740 [72/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:35.740 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:35.740 [74/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:35.740 [75/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:35.740 [76/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:35.740 [77/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:35.740 [78/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:35.740 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:35.740 [80/264] Linking 
static target lib/librte_ring.a 00:03:35.740 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:35.740 [82/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:35.740 [83/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.740 [84/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:35.740 [85/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:35.740 [86/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:35.740 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:35.740 [88/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:35.740 [89/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:35.740 [90/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:35.998 [91/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:35.998 [92/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:35.998 [93/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:35.998 [94/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:35.998 [95/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:35.998 [96/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:35.998 [97/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:35.998 [98/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:35.998 [99/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:35.998 [100/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:35.998 [101/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:35.998 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:35.998 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:35.998 [104/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:35.998 [105/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:35.998 [106/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:35.998 [107/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:35.998 [108/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:35.998 [109/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:35.998 [110/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:35.998 [111/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:35.998 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:35.998 [113/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:35.998 [114/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:35.998 [115/264] Linking static target lib/librte_meter.a 00:03:35.998 [116/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:35.998 [117/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:35.998 [118/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:35.998 [119/264] Linking static target lib/librte_cmdline.a 00:03:35.998 [120/264] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:03:35.998 [121/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:35.998 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:35.998 [123/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:35.998 [124/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:35.998 [125/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:35.998 [126/264] Linking static target lib/librte_telemetry.a 00:03:35.998 [127/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:35.998 [128/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:35.998 [129/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:35.998 [130/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:35.998 [131/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.998 [132/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:35.998 [133/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:35.998 [134/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:35.998 [135/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:35.998 [136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:35.998 [137/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:35.998 [138/264] Linking target lib/librte_log.so.24.0 00:03:35.998 [139/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:35.998 [140/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.998 [141/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.998 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:35.998 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:35.998 [144/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:35.998 [145/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:35.998 [146/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:35.998 [147/264] Linking static target lib/librte_power.a 00:03:35.998 [148/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:35.998 [149/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:35.998 [150/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:35.998 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:35.998 [152/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:35.998 [153/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:35.998 [154/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:35.998 [155/264] Linking static target lib/librte_net.a 00:03:35.998 [156/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.998 [157/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:35.998 [158/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:35.998 [159/264] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:35.998 [160/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:35.998 [161/264] Linking static target lib/librte_compressdev.a 00:03:36.256 [162/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:36.256 [163/264] Linking target lib/librte_kvargs.so.24.0 00:03:36.256 [164/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:36.256 [165/264] Linking static target lib/librte_eal.a 00:03:36.256 [166/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:36.256 [167/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:36.256 [168/264] Linking static target lib/librte_dmadev.a 00:03:36.256 [169/264] Linking static target lib/librte_mempool.a 00:03:36.256 [170/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:36.256 [171/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:36.256 [172/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:36.256 [173/264] Linking static target lib/librte_rcu.a 00:03:36.256 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:36.256 [175/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:36.256 [176/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:36.256 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:36.256 [178/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:36.256 [179/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:36.256 [180/264] Linking static target lib/librte_reorder.a 00:03:36.256 [181/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:36.256 [182/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:36.256 [183/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:36.256 [184/264] Linking static target drivers/librte_bus_vdev.a 00:03:36.256 [185/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:36.256 [186/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:36.256 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:36.256 [188/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.256 [189/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.256 [190/264] Linking target lib/librte_telemetry.so.24.0 00:03:36.256 [191/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:36.256 [192/264] Linking static target lib/librte_security.a 00:03:36.256 [193/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:36.256 [194/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:36.256 [195/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:36.256 [196/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:36.256 [197/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:36.513 [198/264] Linking static target drivers/librte_bus_pci.a 00:03:36.513 [199/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.513 [200/264] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:36.513 [201/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:36.513 [202/264] Linking static target drivers/librte_mempool_ring.a 00:03:36.513 [203/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:36.513 [204/264] Linking static target lib/librte_mbuf.a 00:03:36.513 [205/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.513 [206/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.513 [207/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.513 [208/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:36.513 [209/264] Linking static target lib/librte_hash.a 00:03:36.513 [210/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.513 [211/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.513 [212/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.770 [213/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.770 [214/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.770 [215/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:36.770 [216/264] Linking static target lib/librte_cryptodev.a 00:03:36.770 [217/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.770 [218/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:36.770 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.027 [220/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.328 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:37.328 [222/264] Linking static target lib/librte_ethdev.a 00:03:37.646 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:37.904 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.429 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:40.429 [226/264] Linking static target lib/librte_vhost.a 00:03:41.362 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.733 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.733 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.733 [230/264] Linking target lib/librte_eal.so.24.0 00:03:42.733 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:42.991 [232/264] Linking target lib/librte_meter.so.24.0 00:03:42.991 [233/264] Linking target lib/librte_pci.so.24.0 00:03:42.991 [234/264] Linking target lib/librte_timer.so.24.0 00:03:42.991 [235/264] Linking target lib/librte_ring.so.24.0 00:03:42.991 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:03:42.991 [237/264] Linking target lib/librte_dmadev.so.24.0 00:03:42.991 [238/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 
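The DPDK build in progress above was configured with the "User defined options" summarized earlier in the log. As a rough, hand-assembled sketch (not a command captured from this job), that configuration corresponds to a meson/ninja invocation along the following lines; the option names are standard meson/DPDK options and the values are copied from that summary, with the long disable_apps/disable_libs lists abbreviated.

  # sketch only: reconstructs the logged "User defined options", not the job's literal command line
  # the ... placeholders stand for the full disable_apps/disable_libs lists printed above
  cd /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk
  meson setup build-tmp --prefix=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build --libdir=lib \
    -Dbuildtype=debug -Ddefault_library=shared -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=... -Ddisable_libs=... \
    -Denable_docs=false -Denable_kmods=false -Dtests=false
  ninja -C build-tmp -j 128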
00:03:42.991 [239/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:42.991 [240/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:42.991 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:42.991 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:42.991 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:03:42.991 [244/264] Linking target lib/librte_rcu.so.24.0 00:03:42.991 [245/264] Linking target lib/librte_mempool.so.24.0 00:03:42.991 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:42.991 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:43.249 [248/264] Linking target lib/librte_mbuf.so.24.0 00:03:43.249 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:03:43.249 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:43.249 [251/264] Linking target lib/librte_reorder.so.24.0 00:03:43.249 [252/264] Linking target lib/librte_compressdev.so.24.0 00:03:43.249 [253/264] Linking target lib/librte_cryptodev.so.24.0 00:03:43.249 [254/264] Linking target lib/librte_net.so.24.0 00:03:43.249 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:43.249 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:43.506 [257/264] Linking target lib/librte_cmdline.so.24.0 00:03:43.506 [258/264] Linking target lib/librte_security.so.24.0 00:03:43.506 [259/264] Linking target lib/librte_hash.so.24.0 00:03:43.506 [260/264] Linking target lib/librte_ethdev.so.24.0 00:03:43.506 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:43.506 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:43.506 [263/264] Linking target lib/librte_power.so.24.0 00:03:43.506 [264/264] Linking target lib/librte_vhost.so.24.0 00:03:43.506 INFO: autodetecting backend as ninja 00:03:43.506 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:03:44.439 CC lib/ut_mock/mock.o 00:03:44.439 CC lib/log/log.o 00:03:44.439 CC lib/log/log_deprecated.o 00:03:44.439 CC lib/log/log_flags.o 00:03:44.439 CC lib/ut/ut.o 00:03:44.439 LIB libspdk_ut_mock.a 00:03:44.439 SO libspdk_ut_mock.so.6.0 00:03:44.439 LIB libspdk_log.a 00:03:44.439 LIB libspdk_ut.a 00:03:44.439 SO libspdk_log.so.7.0 00:03:44.439 SO libspdk_ut.so.2.0 00:03:44.439 SYMLINK libspdk_ut_mock.so 00:03:44.439 SYMLINK libspdk_ut.so 00:03:44.439 SYMLINK libspdk_log.so 00:03:44.697 CC lib/dma/dma.o 00:03:44.697 CXX lib/trace_parser/trace.o 00:03:44.697 CC lib/ioat/ioat.o 00:03:44.697 CC lib/util/cpuset.o 00:03:44.697 CC lib/util/base64.o 00:03:44.697 CC lib/util/bit_array.o 00:03:44.697 CC lib/util/crc32.o 00:03:44.697 CC lib/util/crc16.o 00:03:44.697 CC lib/util/dif.o 00:03:44.697 CC lib/util/fd.o 00:03:44.697 CC lib/util/crc32c.o 00:03:44.697 CC lib/util/crc32_ieee.o 00:03:44.697 CC lib/util/crc64.o 00:03:44.697 CC lib/util/iov.o 00:03:44.697 CC lib/util/file.o 00:03:44.697 CC lib/util/hexlify.o 00:03:44.697 CC lib/util/math.o 00:03:44.697 CC lib/util/pipe.o 00:03:44.697 CC lib/util/strerror_tls.o 00:03:44.697 CC lib/util/fd_group.o 00:03:44.697 CC lib/util/uuid.o 00:03:44.697 CC lib/util/string.o 00:03:44.697 CC lib/util/xor.o 
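The CC/LIB/SO/SYMLINK lines in this stretch of the log are SPDK's own quiet make output; the DPDK ninja build finished just above. The job's actual ./configure invocation is not visible in this excerpt, so the following is only an illustrative sketch of how the same kind of SPDK build is typically driven by hand against the DPDK tree that was just produced; the flag choices here are assumptions, not values taken from the log.

  # illustrative sketch; the real ./configure arguments for this job are not shown in this excerpt
  cd /var/jenkins/workspace/dsa-phy-autotest/spdk
  ./configure --with-dpdk=./dpdk/build --enable-debug --enable-asan
  make -j 128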
00:03:44.697 CC lib/util/zipf.o 00:03:44.697 CC lib/vfio_user/host/vfio_user_pci.o 00:03:44.697 CC lib/vfio_user/host/vfio_user.o 00:03:44.955 LIB libspdk_dma.a 00:03:44.955 SO libspdk_dma.so.4.0 00:03:44.955 SYMLINK libspdk_dma.so 00:03:44.955 LIB libspdk_ioat.a 00:03:44.955 LIB libspdk_vfio_user.a 00:03:44.955 SO libspdk_ioat.so.7.0 00:03:44.955 SO libspdk_vfio_user.so.5.0 00:03:44.955 SYMLINK libspdk_ioat.so 00:03:44.955 SYMLINK libspdk_vfio_user.so 00:03:45.212 LIB libspdk_util.a 00:03:45.212 SO libspdk_util.so.9.0 00:03:45.212 LIB libspdk_trace_parser.a 00:03:45.212 SYMLINK libspdk_util.so 00:03:45.212 SO libspdk_trace_parser.so.5.0 00:03:45.471 SYMLINK libspdk_trace_parser.so 00:03:45.471 CC lib/vmd/vmd.o 00:03:45.471 CC lib/vmd/led.o 00:03:45.471 CC lib/rdma/common.o 00:03:45.471 CC lib/rdma/rdma_verbs.o 00:03:45.471 CC lib/conf/conf.o 00:03:45.471 CC lib/json/json_util.o 00:03:45.471 CC lib/json/json_parse.o 00:03:45.471 CC lib/json/json_write.o 00:03:45.471 CC lib/idxd/idxd.o 00:03:45.471 CC lib/idxd/idxd_user.o 00:03:45.471 CC lib/env_dpdk/env.o 00:03:45.471 CC lib/env_dpdk/pci.o 00:03:45.471 CC lib/env_dpdk/threads.o 00:03:45.471 CC lib/env_dpdk/memory.o 00:03:45.471 CC lib/env_dpdk/init.o 00:03:45.471 CC lib/env_dpdk/pci_ioat.o 00:03:45.471 CC lib/env_dpdk/pci_virtio.o 00:03:45.471 CC lib/env_dpdk/pci_vmd.o 00:03:45.471 CC lib/env_dpdk/pci_idxd.o 00:03:45.471 CC lib/env_dpdk/sigbus_handler.o 00:03:45.471 CC lib/env_dpdk/pci_event.o 00:03:45.471 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:45.471 CC lib/env_dpdk/pci_dpdk.o 00:03:45.471 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:45.729 LIB libspdk_rdma.a 00:03:45.729 LIB libspdk_conf.a 00:03:45.729 SO libspdk_rdma.so.6.0 00:03:45.729 LIB libspdk_json.a 00:03:45.729 SO libspdk_conf.so.6.0 00:03:45.729 SO libspdk_json.so.6.0 00:03:45.729 SYMLINK libspdk_rdma.so 00:03:45.729 SYMLINK libspdk_conf.so 00:03:45.729 SYMLINK libspdk_json.so 00:03:45.987 LIB libspdk_vmd.a 00:03:45.987 CC lib/jsonrpc/jsonrpc_server.o 00:03:45.987 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:45.987 CC lib/jsonrpc/jsonrpc_client.o 00:03:45.987 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:45.987 SO libspdk_vmd.so.6.0 00:03:45.987 SYMLINK libspdk_vmd.so 00:03:46.245 LIB libspdk_idxd.a 00:03:46.245 SO libspdk_idxd.so.12.0 00:03:46.245 SYMLINK libspdk_idxd.so 00:03:46.245 LIB libspdk_jsonrpc.a 00:03:46.245 SO libspdk_jsonrpc.so.6.0 00:03:46.512 SYMLINK libspdk_jsonrpc.so 00:03:46.512 CC lib/rpc/rpc.o 00:03:46.775 LIB libspdk_rpc.a 00:03:46.775 SO libspdk_rpc.so.6.0 00:03:46.775 SYMLINK libspdk_rpc.so 00:03:47.032 CC lib/keyring/keyring.o 00:03:47.032 CC lib/keyring/keyring_rpc.o 00:03:47.032 CC lib/trace/trace.o 00:03:47.032 CC lib/trace/trace_flags.o 00:03:47.032 CC lib/trace/trace_rpc.o 00:03:47.032 CC lib/notify/notify.o 00:03:47.032 CC lib/notify/notify_rpc.o 00:03:47.032 LIB libspdk_env_dpdk.a 00:03:47.032 LIB libspdk_notify.a 00:03:47.032 LIB libspdk_trace.a 00:03:47.290 SO libspdk_notify.so.6.0 00:03:47.290 SO libspdk_env_dpdk.so.14.0 00:03:47.290 SO libspdk_trace.so.10.0 00:03:47.290 LIB libspdk_keyring.a 00:03:47.290 SYMLINK libspdk_notify.so 00:03:47.290 SYMLINK libspdk_trace.so 00:03:47.290 SO libspdk_keyring.so.1.0 00:03:47.290 SYMLINK libspdk_keyring.so 00:03:47.290 SYMLINK libspdk_env_dpdk.so 00:03:47.549 CC lib/thread/thread.o 00:03:47.549 CC lib/thread/iobuf.o 00:03:47.549 CC lib/sock/sock.o 00:03:47.549 CC lib/sock/sock_rpc.o 00:03:47.807 LIB libspdk_sock.a 00:03:48.065 SO libspdk_sock.so.9.0 00:03:48.065 SYMLINK libspdk_sock.so 00:03:48.322 CC 
lib/nvme/nvme_ctrlr.o 00:03:48.322 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:48.322 CC lib/nvme/nvme_fabric.o 00:03:48.322 CC lib/nvme/nvme_ns.o 00:03:48.322 CC lib/nvme/nvme_pcie_common.o 00:03:48.322 CC lib/nvme/nvme_ns_cmd.o 00:03:48.322 CC lib/nvme/nvme.o 00:03:48.322 CC lib/nvme/nvme_pcie.o 00:03:48.322 CC lib/nvme/nvme_qpair.o 00:03:48.322 CC lib/nvme/nvme_quirks.o 00:03:48.322 CC lib/nvme/nvme_transport.o 00:03:48.322 CC lib/nvme/nvme_discovery.o 00:03:48.322 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:48.322 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:48.322 CC lib/nvme/nvme_tcp.o 00:03:48.322 CC lib/nvme/nvme_opal.o 00:03:48.322 CC lib/nvme/nvme_poll_group.o 00:03:48.322 CC lib/nvme/nvme_io_msg.o 00:03:48.322 CC lib/nvme/nvme_stubs.o 00:03:48.322 CC lib/nvme/nvme_zns.o 00:03:48.322 CC lib/nvme/nvme_auth.o 00:03:48.322 CC lib/nvme/nvme_cuse.o 00:03:48.322 CC lib/nvme/nvme_rdma.o 00:03:49.256 LIB libspdk_thread.a 00:03:49.256 SO libspdk_thread.so.10.0 00:03:49.256 SYMLINK libspdk_thread.so 00:03:49.515 CC lib/virtio/virtio.o 00:03:49.515 CC lib/blob/blobstore.o 00:03:49.515 CC lib/blob/request.o 00:03:49.515 CC lib/virtio/virtio_vhost_user.o 00:03:49.515 CC lib/blob/zeroes.o 00:03:49.515 CC lib/virtio/virtio_pci.o 00:03:49.515 CC lib/virtio/virtio_vfio_user.o 00:03:49.515 CC lib/blob/blob_bs_dev.o 00:03:49.515 CC lib/init/json_config.o 00:03:49.515 CC lib/init/subsystem.o 00:03:49.515 CC lib/init/subsystem_rpc.o 00:03:49.515 CC lib/init/rpc.o 00:03:49.515 CC lib/accel/accel_rpc.o 00:03:49.515 CC lib/accel/accel.o 00:03:49.515 CC lib/accel/accel_sw.o 00:03:49.515 LIB libspdk_init.a 00:03:49.515 SO libspdk_init.so.5.0 00:03:49.773 SYMLINK libspdk_init.so 00:03:49.773 LIB libspdk_virtio.a 00:03:49.773 SO libspdk_virtio.so.7.0 00:03:49.773 SYMLINK libspdk_virtio.so 00:03:50.032 CC lib/event/app.o 00:03:50.032 CC lib/event/reactor.o 00:03:50.032 CC lib/event/scheduler_static.o 00:03:50.032 CC lib/event/log_rpc.o 00:03:50.032 CC lib/event/app_rpc.o 00:03:50.032 LIB libspdk_nvme.a 00:03:50.292 SO libspdk_nvme.so.13.0 00:03:50.292 LIB libspdk_event.a 00:03:50.292 SO libspdk_event.so.13.0 00:03:50.551 SYMLINK libspdk_event.so 00:03:50.551 SYMLINK libspdk_nvme.so 00:03:50.551 LIB libspdk_accel.a 00:03:50.551 SO libspdk_accel.so.15.0 00:03:50.810 SYMLINK libspdk_accel.so 00:03:50.810 CC lib/bdev/bdev.o 00:03:50.810 CC lib/bdev/part.o 00:03:50.810 CC lib/bdev/bdev_rpc.o 00:03:50.810 CC lib/bdev/bdev_zone.o 00:03:50.810 CC lib/bdev/scsi_nvme.o 00:03:52.185 LIB libspdk_blob.a 00:03:52.185 SO libspdk_blob.so.11.0 00:03:52.185 SYMLINK libspdk_blob.so 00:03:52.443 CC lib/lvol/lvol.o 00:03:52.443 CC lib/blobfs/tree.o 00:03:52.443 CC lib/blobfs/blobfs.o 00:03:53.377 LIB libspdk_blobfs.a 00:03:53.377 SO libspdk_blobfs.so.10.0 00:03:53.636 LIB libspdk_lvol.a 00:03:53.636 SYMLINK libspdk_blobfs.so 00:03:53.636 SO libspdk_lvol.so.10.0 00:03:53.636 LIB libspdk_bdev.a 00:03:53.636 SO libspdk_bdev.so.15.0 00:03:53.636 SYMLINK libspdk_lvol.so 00:03:53.636 SYMLINK libspdk_bdev.so 00:03:53.895 CC lib/nvmf/subsystem.o 00:03:53.895 CC lib/nvmf/ctrlr.o 00:03:53.895 CC lib/nvmf/ctrlr_discovery.o 00:03:53.895 CC lib/nvmf/ctrlr_bdev.o 00:03:53.895 CC lib/nvmf/nvmf_rpc.o 00:03:53.895 CC lib/nvmf/nvmf.o 00:03:53.895 CC lib/nvmf/transport.o 00:03:53.895 CC lib/nvmf/stubs.o 00:03:53.895 CC lib/nvmf/rdma.o 00:03:53.895 CC lib/nvmf/tcp.o 00:03:53.895 CC lib/nvmf/auth.o 00:03:53.895 CC lib/ublk/ublk.o 00:03:53.895 CC lib/ublk/ublk_rpc.o 00:03:53.895 CC lib/ftl/ftl_core.o 00:03:53.895 CC lib/ftl/ftl_init.o 00:03:53.895 CC lib/ftl/ftl_io.o 
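The libraries being compiled here (lib/nvme, lib/nvmf, lib/ftl and so on) land under build/lib as static and shared objects together with pkg-config metadata. As a hedged illustration of how an out-of-tree consumer would link against them, under the assumption that the usual upstream package names (spdk_nvme, spdk_env_dpdk) apply; the tool name, source file, and PKG_CONFIG_PATH below are hypothetical and nothing in this excerpt depends on them:

  # hypothetical out-of-tree consumer; package names and paths are assumptions, not taken from this log
  export PKG_CONFIG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib/pkgconfig
  cc -o my_nvme_tool my_nvme_tool.c $(pkg-config --cflags --libs spdk_nvme spdk_env_dpdk)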
00:03:53.895 CC lib/ftl/ftl_sb.o 00:03:53.895 CC lib/ftl/ftl_layout.o 00:03:53.895 CC lib/ftl/ftl_debug.o 00:03:53.895 CC lib/ftl/ftl_l2p.o 00:03:53.895 CC lib/ftl/ftl_band_ops.o 00:03:53.895 CC lib/ftl/ftl_band.o 00:03:53.895 CC lib/ftl/ftl_l2p_flat.o 00:03:53.895 CC lib/ftl/ftl_nv_cache.o 00:03:53.895 CC lib/ftl/ftl_rq.o 00:03:53.895 CC lib/ftl/ftl_writer.o 00:03:53.895 CC lib/nbd/nbd.o 00:03:53.895 CC lib/nbd/nbd_rpc.o 00:03:53.895 CC lib/scsi/dev.o 00:03:53.895 CC lib/ftl/ftl_reloc.o 00:03:53.895 CC lib/scsi/lun.o 00:03:53.895 CC lib/ftl/ftl_p2l.o 00:03:53.895 CC lib/ftl/ftl_l2p_cache.o 00:03:53.895 CC lib/scsi/scsi.o 00:03:53.895 CC lib/scsi/port.o 00:03:53.895 CC lib/scsi/scsi_bdev.o 00:03:53.895 CC lib/scsi/scsi_rpc.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:53.895 CC lib/scsi/scsi_pr.o 00:03:53.895 CC lib/scsi/task.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:53.895 CC lib/ftl/utils/ftl_conf.o 00:03:53.895 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:53.895 CC lib/ftl/utils/ftl_md.o 00:03:53.895 CC lib/ftl/utils/ftl_mempool.o 00:03:53.895 CC lib/ftl/utils/ftl_property.o 00:03:53.895 CC lib/ftl/utils/ftl_bitmap.o 00:03:53.895 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:53.895 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:54.156 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:54.156 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:54.156 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:54.156 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:54.156 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:54.156 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:54.156 CC lib/ftl/base/ftl_base_dev.o 00:03:54.156 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:54.156 CC lib/ftl/ftl_trace.o 00:03:54.156 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:54.156 CC lib/ftl/base/ftl_base_bdev.o 00:03:54.723 LIB libspdk_ublk.a 00:03:54.723 LIB libspdk_nbd.a 00:03:54.723 SO libspdk_ublk.so.3.0 00:03:54.723 SO libspdk_nbd.so.7.0 00:03:54.723 SYMLINK libspdk_nbd.so 00:03:54.723 SYMLINK libspdk_ublk.so 00:03:54.982 LIB libspdk_scsi.a 00:03:54.982 SO libspdk_scsi.so.9.0 00:03:54.982 SYMLINK libspdk_scsi.so 00:03:55.240 LIB libspdk_ftl.a 00:03:55.240 CC lib/iscsi/conn.o 00:03:55.240 CC lib/iscsi/init_grp.o 00:03:55.240 CC lib/iscsi/iscsi.o 00:03:55.240 CC lib/iscsi/md5.o 00:03:55.240 CC lib/vhost/vhost.o 00:03:55.240 CC lib/iscsi/param.o 00:03:55.240 CC lib/vhost/vhost_rpc.o 00:03:55.240 CC lib/vhost/vhost_scsi.o 00:03:55.240 CC lib/iscsi/portal_grp.o 00:03:55.240 CC lib/iscsi/tgt_node.o 00:03:55.240 CC lib/iscsi/task.o 00:03:55.240 CC lib/vhost/vhost_blk.o 00:03:55.240 CC lib/iscsi/iscsi_subsystem.o 00:03:55.240 CC lib/vhost/rte_vhost_user.o 00:03:55.240 CC lib/iscsi/iscsi_rpc.o 00:03:55.240 SO libspdk_ftl.so.9.0 00:03:55.498 SYMLINK libspdk_ftl.so 00:03:56.433 LIB libspdk_nvmf.a 00:03:56.433 LIB libspdk_vhost.a 00:03:56.433 SO libspdk_nvmf.so.18.0 00:03:56.433 SO libspdk_vhost.so.8.0 00:03:56.433 SYMLINK libspdk_vhost.so 00:03:56.692 SYMLINK libspdk_nvmf.so 00:03:56.692 LIB libspdk_iscsi.a 00:03:56.951 SO libspdk_iscsi.so.8.0 00:03:56.951 SYMLINK libspdk_iscsi.so 00:03:57.209 CC 
module/env_dpdk/env_dpdk_rpc.o 00:03:57.466 CC module/blob/bdev/blob_bdev.o 00:03:57.466 CC module/accel/dsa/accel_dsa.o 00:03:57.466 CC module/accel/dsa/accel_dsa_rpc.o 00:03:57.466 CC module/accel/error/accel_error.o 00:03:57.466 CC module/sock/posix/posix.o 00:03:57.466 CC module/accel/error/accel_error_rpc.o 00:03:57.466 CC module/scheduler/gscheduler/gscheduler.o 00:03:57.466 CC module/accel/ioat/accel_ioat.o 00:03:57.466 CC module/accel/ioat/accel_ioat_rpc.o 00:03:57.466 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:57.466 CC module/accel/iaa/accel_iaa.o 00:03:57.466 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:57.466 CC module/accel/iaa/accel_iaa_rpc.o 00:03:57.466 CC module/keyring/file/keyring.o 00:03:57.466 CC module/keyring/file/keyring_rpc.o 00:03:57.466 LIB libspdk_env_dpdk_rpc.a 00:03:57.466 SO libspdk_env_dpdk_rpc.so.6.0 00:03:57.466 LIB libspdk_scheduler_gscheduler.a 00:03:57.466 SYMLINK libspdk_env_dpdk_rpc.so 00:03:57.466 LIB libspdk_accel_error.a 00:03:57.466 SO libspdk_scheduler_gscheduler.so.4.0 00:03:57.466 LIB libspdk_accel_ioat.a 00:03:57.466 SO libspdk_accel_error.so.2.0 00:03:57.466 LIB libspdk_keyring_file.a 00:03:57.467 LIB libspdk_scheduler_dpdk_governor.a 00:03:57.467 SO libspdk_accel_ioat.so.6.0 00:03:57.467 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:57.467 SO libspdk_keyring_file.so.1.0 00:03:57.467 LIB libspdk_blob_bdev.a 00:03:57.467 SYMLINK libspdk_scheduler_gscheduler.so 00:03:57.467 LIB libspdk_scheduler_dynamic.a 00:03:57.724 SYMLINK libspdk_accel_error.so 00:03:57.724 SO libspdk_blob_bdev.so.11.0 00:03:57.724 SO libspdk_scheduler_dynamic.so.4.0 00:03:57.724 SYMLINK libspdk_accel_ioat.so 00:03:57.724 LIB libspdk_accel_iaa.a 00:03:57.724 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:57.724 SYMLINK libspdk_keyring_file.so 00:03:57.724 SO libspdk_accel_iaa.so.3.0 00:03:57.724 LIB libspdk_accel_dsa.a 00:03:57.724 SYMLINK libspdk_blob_bdev.so 00:03:57.724 SYMLINK libspdk_scheduler_dynamic.so 00:03:57.724 SO libspdk_accel_dsa.so.5.0 00:03:57.724 SYMLINK libspdk_accel_iaa.so 00:03:57.724 SYMLINK libspdk_accel_dsa.so 00:03:58.036 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:58.036 CC module/bdev/nvme/bdev_nvme.o 00:03:58.036 CC module/blobfs/bdev/blobfs_bdev.o 00:03:58.036 CC module/bdev/nvme/nvme_rpc.o 00:03:58.036 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:58.036 CC module/bdev/nvme/bdev_mdns_client.o 00:03:58.036 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:58.036 CC module/bdev/nvme/vbdev_opal.o 00:03:58.036 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:58.036 CC module/bdev/split/vbdev_split.o 00:03:58.036 CC module/bdev/delay/vbdev_delay.o 00:03:58.036 CC module/bdev/null/bdev_null.o 00:03:58.036 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:58.036 CC module/bdev/null/bdev_null_rpc.o 00:03:58.036 CC module/bdev/gpt/gpt.o 00:03:58.036 CC module/bdev/raid/bdev_raid.o 00:03:58.036 CC module/bdev/split/vbdev_split_rpc.o 00:03:58.036 CC module/bdev/raid/bdev_raid_rpc.o 00:03:58.036 CC module/bdev/iscsi/bdev_iscsi.o 00:03:58.036 CC module/bdev/gpt/vbdev_gpt.o 00:03:58.036 CC module/bdev/aio/bdev_aio.o 00:03:58.036 CC module/bdev/raid/bdev_raid_sb.o 00:03:58.036 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:58.036 CC module/bdev/raid/raid1.o 00:03:58.036 CC module/bdev/aio/bdev_aio_rpc.o 00:03:58.036 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:58.036 CC module/bdev/error/vbdev_error.o 00:03:58.036 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:58.036 CC module/bdev/raid/raid0.o 00:03:58.036 CC module/bdev/lvol/vbdev_lvol.o 
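The module/accel/dsa objects above build the DSA accel module that gives this job ("dsa-phy-autotest") its name. Purely as an illustration of what that module is for, and not a command run anywhere in this excerpt, one common way to exercise accel backends such as DSA is SPDK's accel_perf example; the option values below are arbitrary.

  # hypothetical invocation for illustration only
  # -q queue depth, -o transfer size in bytes, -t run time in seconds, -w workload type
  sudo ./build/examples/accel_perf -q 64 -o 4096 -t 5 -w copy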
00:03:58.036 CC module/bdev/raid/concat.o 00:03:58.036 CC module/bdev/error/vbdev_error_rpc.o 00:03:58.036 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:58.036 CC module/bdev/malloc/bdev_malloc.o 00:03:58.036 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:58.036 CC module/bdev/ftl/bdev_ftl.o 00:03:58.036 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:58.036 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:58.036 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:58.036 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:58.036 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:58.036 CC module/bdev/passthru/vbdev_passthru.o 00:03:58.330 LIB libspdk_blobfs_bdev.a 00:03:58.330 SO libspdk_blobfs_bdev.so.6.0 00:03:58.330 LIB libspdk_sock_posix.a 00:03:58.330 LIB libspdk_bdev_split.a 00:03:58.330 SO libspdk_sock_posix.so.6.0 00:03:58.330 SYMLINK libspdk_blobfs_bdev.so 00:03:58.330 SO libspdk_bdev_split.so.6.0 00:03:58.330 LIB libspdk_bdev_null.a 00:03:58.330 SO libspdk_bdev_null.so.6.0 00:03:58.330 LIB libspdk_bdev_gpt.a 00:03:58.330 SYMLINK libspdk_sock_posix.so 00:03:58.330 SYMLINK libspdk_bdev_split.so 00:03:58.330 LIB libspdk_bdev_error.a 00:03:58.330 SO libspdk_bdev_gpt.so.6.0 00:03:58.330 SO libspdk_bdev_error.so.6.0 00:03:58.330 LIB libspdk_bdev_ftl.a 00:03:58.330 LIB libspdk_bdev_zone_block.a 00:03:58.330 SYMLINK libspdk_bdev_null.so 00:03:58.330 LIB libspdk_bdev_aio.a 00:03:58.330 LIB libspdk_bdev_passthru.a 00:03:58.330 SO libspdk_bdev_ftl.so.6.0 00:03:58.330 SO libspdk_bdev_zone_block.so.6.0 00:03:58.330 SO libspdk_bdev_passthru.so.6.0 00:03:58.330 LIB libspdk_bdev_delay.a 00:03:58.330 SYMLINK libspdk_bdev_gpt.so 00:03:58.330 SO libspdk_bdev_aio.so.6.0 00:03:58.330 SYMLINK libspdk_bdev_error.so 00:03:58.330 SO libspdk_bdev_delay.so.6.0 00:03:58.588 SYMLINK libspdk_bdev_ftl.so 00:03:58.588 LIB libspdk_bdev_iscsi.a 00:03:58.588 SYMLINK libspdk_bdev_zone_block.so 00:03:58.588 SYMLINK libspdk_bdev_aio.so 00:03:58.588 SYMLINK libspdk_bdev_passthru.so 00:03:58.588 LIB libspdk_bdev_malloc.a 00:03:58.588 LIB libspdk_bdev_virtio.a 00:03:58.588 SO libspdk_bdev_iscsi.so.6.0 00:03:58.588 SO libspdk_bdev_malloc.so.6.0 00:03:58.588 SYMLINK libspdk_bdev_delay.so 00:03:58.588 SO libspdk_bdev_virtio.so.6.0 00:03:58.588 SYMLINK libspdk_bdev_iscsi.so 00:03:58.588 LIB libspdk_bdev_lvol.a 00:03:58.588 SYMLINK libspdk_bdev_malloc.so 00:03:58.588 SYMLINK libspdk_bdev_virtio.so 00:03:58.588 SO libspdk_bdev_lvol.so.6.0 00:03:58.588 SYMLINK libspdk_bdev_lvol.so 00:03:59.153 LIB libspdk_bdev_raid.a 00:03:59.153 SO libspdk_bdev_raid.so.6.0 00:03:59.153 SYMLINK libspdk_bdev_raid.so 00:03:59.721 LIB libspdk_bdev_nvme.a 00:03:59.721 SO libspdk_bdev_nvme.so.7.0 00:03:59.979 SYMLINK libspdk_bdev_nvme.so 00:04:00.544 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:00.544 CC module/event/subsystems/keyring/keyring.o 00:04:00.544 CC module/event/subsystems/iobuf/iobuf.o 00:04:00.544 CC module/event/subsystems/scheduler/scheduler.o 00:04:00.544 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:00.544 CC module/event/subsystems/vmd/vmd.o 00:04:00.544 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:00.544 CC module/event/subsystems/sock/sock.o 00:04:00.544 LIB libspdk_event_vhost_blk.a 00:04:00.544 LIB libspdk_event_keyring.a 00:04:00.544 SO libspdk_event_vhost_blk.so.3.0 00:04:00.544 SO libspdk_event_keyring.so.1.0 00:04:00.544 LIB libspdk_event_scheduler.a 00:04:00.544 LIB libspdk_event_sock.a 00:04:00.544 LIB libspdk_event_vmd.a 00:04:00.544 LIB libspdk_event_iobuf.a 00:04:00.544 SYMLINK libspdk_event_keyring.so 00:04:00.544 SO 
libspdk_event_scheduler.so.4.0 00:04:00.544 SYMLINK libspdk_event_vhost_blk.so 00:04:00.544 SO libspdk_event_sock.so.5.0 00:04:00.544 SO libspdk_event_vmd.so.6.0 00:04:00.544 SO libspdk_event_iobuf.so.3.0 00:04:00.544 SYMLINK libspdk_event_scheduler.so 00:04:00.544 SYMLINK libspdk_event_vmd.so 00:04:00.544 SYMLINK libspdk_event_iobuf.so 00:04:00.544 SYMLINK libspdk_event_sock.so 00:04:00.802 CC module/event/subsystems/accel/accel.o 00:04:01.061 LIB libspdk_event_accel.a 00:04:01.061 SO libspdk_event_accel.so.6.0 00:04:01.061 SYMLINK libspdk_event_accel.so 00:04:01.319 CC module/event/subsystems/bdev/bdev.o 00:04:01.319 LIB libspdk_event_bdev.a 00:04:01.319 SO libspdk_event_bdev.so.6.0 00:04:01.577 SYMLINK libspdk_event_bdev.so 00:04:01.577 CC module/event/subsystems/ublk/ublk.o 00:04:01.577 CC module/event/subsystems/scsi/scsi.o 00:04:01.577 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:01.577 CC module/event/subsystems/nbd/nbd.o 00:04:01.577 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.836 LIB libspdk_event_ublk.a 00:04:01.836 LIB libspdk_event_nbd.a 00:04:01.836 LIB libspdk_event_scsi.a 00:04:01.836 SO libspdk_event_ublk.so.3.0 00:04:01.836 SO libspdk_event_nbd.so.6.0 00:04:01.836 SO libspdk_event_scsi.so.6.0 00:04:01.836 SYMLINK libspdk_event_ublk.so 00:04:01.836 SYMLINK libspdk_event_nbd.so 00:04:01.836 SYMLINK libspdk_event_scsi.so 00:04:01.836 LIB libspdk_event_nvmf.a 00:04:01.836 SO libspdk_event_nvmf.so.6.0 00:04:02.094 SYMLINK libspdk_event_nvmf.so 00:04:02.094 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:02.094 CC module/event/subsystems/iscsi/iscsi.o 00:04:02.352 LIB libspdk_event_iscsi.a 00:04:02.352 LIB libspdk_event_vhost_scsi.a 00:04:02.352 SO libspdk_event_iscsi.so.6.0 00:04:02.352 SO libspdk_event_vhost_scsi.so.3.0 00:04:02.352 SYMLINK libspdk_event_iscsi.so 00:04:02.352 SYMLINK libspdk_event_vhost_scsi.so 00:04:02.352 SO libspdk.so.6.0 00:04:02.352 SYMLINK libspdk.so 00:04:02.610 CC app/trace_record/trace_record.o 00:04:02.610 CC app/spdk_lspci/spdk_lspci.o 00:04:02.610 CXX app/trace/trace.o 00:04:02.610 CC app/spdk_top/spdk_top.o 00:04:02.610 CC app/spdk_nvme_identify/identify.o 00:04:02.870 CC app/spdk_nvme_perf/perf.o 00:04:02.870 CC app/spdk_nvme_discover/discovery_aer.o 00:04:02.870 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.870 CC test/rpc_client/rpc_client_test.o 00:04:02.870 TEST_HEADER include/spdk/accel_module.h 00:04:02.870 TEST_HEADER include/spdk/accel.h 00:04:02.870 TEST_HEADER include/spdk/assert.h 00:04:02.870 TEST_HEADER include/spdk/barrier.h 00:04:02.870 CC app/iscsi_tgt/iscsi_tgt.o 00:04:02.870 TEST_HEADER include/spdk/base64.h 00:04:02.870 TEST_HEADER include/spdk/bdev.h 00:04:02.870 TEST_HEADER include/spdk/bdev_module.h 00:04:02.870 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.870 TEST_HEADER include/spdk/bit_pool.h 00:04:02.870 TEST_HEADER include/spdk/bit_array.h 00:04:02.870 CC app/nvmf_tgt/nvmf_main.o 00:04:02.870 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.870 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.870 CC app/spdk_dd/spdk_dd.o 00:04:02.870 TEST_HEADER include/spdk/blob.h 00:04:02.870 TEST_HEADER include/spdk/config.h 00:04:02.870 CC app/vhost/vhost.o 00:04:02.870 TEST_HEADER include/spdk/cpuset.h 00:04:02.870 TEST_HEADER include/spdk/conf.h 00:04:02.870 TEST_HEADER include/spdk/crc64.h 00:04:02.870 TEST_HEADER include/spdk/blobfs.h 00:04:02.870 TEST_HEADER include/spdk/crc16.h 00:04:02.870 TEST_HEADER include/spdk/crc32.h 00:04:02.870 TEST_HEADER include/spdk/dif.h 00:04:02.870 TEST_HEADER 
include/spdk/dma.h 00:04:02.870 TEST_HEADER include/spdk/endian.h 00:04:02.870 TEST_HEADER include/spdk/env.h 00:04:02.870 TEST_HEADER include/spdk/env_dpdk.h 00:04:02.870 CC app/spdk_tgt/spdk_tgt.o 00:04:02.870 TEST_HEADER include/spdk/fd.h 00:04:02.870 TEST_HEADER include/spdk/fd_group.h 00:04:02.870 TEST_HEADER include/spdk/event.h 00:04:02.870 TEST_HEADER include/spdk/ftl.h 00:04:02.870 TEST_HEADER include/spdk/file.h 00:04:02.870 TEST_HEADER include/spdk/histogram_data.h 00:04:02.870 TEST_HEADER include/spdk/idxd.h 00:04:02.870 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.870 TEST_HEADER include/spdk/hexlify.h 00:04:02.870 TEST_HEADER include/spdk/init.h 00:04:02.870 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.870 TEST_HEADER include/spdk/iscsi_spec.h 00:04:02.870 TEST_HEADER include/spdk/ioat.h 00:04:02.870 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.871 TEST_HEADER include/spdk/json.h 00:04:02.871 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.871 TEST_HEADER include/spdk/keyring.h 00:04:02.871 TEST_HEADER include/spdk/keyring_module.h 00:04:02.871 TEST_HEADER include/spdk/likely.h 00:04:02.871 TEST_HEADER include/spdk/lvol.h 00:04:02.871 TEST_HEADER include/spdk/log.h 00:04:02.871 TEST_HEADER include/spdk/notify.h 00:04:02.871 TEST_HEADER include/spdk/mmio.h 00:04:02.871 TEST_HEADER include/spdk/nbd.h 00:04:02.871 TEST_HEADER include/spdk/memory.h 00:04:02.871 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.871 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.871 TEST_HEADER include/spdk/nvme.h 00:04:02.871 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.871 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.871 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.871 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.871 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.871 TEST_HEADER include/spdk/nvmf.h 00:04:02.871 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.871 TEST_HEADER include/spdk/nvmf_transport.h 00:04:02.871 TEST_HEADER include/spdk/opal.h 00:04:02.871 TEST_HEADER include/spdk/pci_ids.h 00:04:02.871 TEST_HEADER include/spdk/opal_spec.h 00:04:02.871 TEST_HEADER include/spdk/pipe.h 00:04:02.871 TEST_HEADER include/spdk/queue.h 00:04:02.871 TEST_HEADER include/spdk/reduce.h 00:04:02.871 TEST_HEADER include/spdk/rpc.h 00:04:02.871 TEST_HEADER include/spdk/scheduler.h 00:04:02.871 TEST_HEADER include/spdk/scsi.h 00:04:02.871 TEST_HEADER include/spdk/sock.h 00:04:02.871 TEST_HEADER include/spdk/scsi_spec.h 00:04:02.871 TEST_HEADER include/spdk/stdinc.h 00:04:02.871 TEST_HEADER include/spdk/string.h 00:04:02.871 TEST_HEADER include/spdk/thread.h 00:04:02.871 TEST_HEADER include/spdk/trace.h 00:04:02.871 TEST_HEADER include/spdk/trace_parser.h 00:04:02.871 TEST_HEADER include/spdk/tree.h 00:04:02.871 TEST_HEADER include/spdk/ublk.h 00:04:02.871 TEST_HEADER include/spdk/util.h 00:04:02.871 TEST_HEADER include/spdk/uuid.h 00:04:02.871 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:02.871 TEST_HEADER include/spdk/version.h 00:04:02.871 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:02.871 TEST_HEADER include/spdk/vmd.h 00:04:02.871 TEST_HEADER include/spdk/vhost.h 00:04:02.871 TEST_HEADER include/spdk/zipf.h 00:04:02.871 TEST_HEADER include/spdk/xor.h 00:04:02.871 CXX test/cpp_headers/accel.o 00:04:02.871 CXX test/cpp_headers/accel_module.o 00:04:02.871 CXX test/cpp_headers/assert.o 00:04:02.871 CXX test/cpp_headers/bdev.o 00:04:02.871 CXX test/cpp_headers/barrier.o 00:04:02.871 CXX test/cpp_headers/base64.o 00:04:02.871 CXX test/cpp_headers/blob_bdev.o 00:04:02.871 CXX 
test/cpp_headers/bdev_zone.o 00:04:02.871 CXX test/cpp_headers/bit_array.o 00:04:02.871 CXX test/cpp_headers/bdev_module.o 00:04:02.871 CXX test/cpp_headers/blobfs_bdev.o 00:04:02.871 CXX test/cpp_headers/bit_pool.o 00:04:02.871 CXX test/cpp_headers/blobfs.o 00:04:02.871 CXX test/cpp_headers/conf.o 00:04:02.871 CXX test/cpp_headers/blob.o 00:04:02.871 CXX test/cpp_headers/cpuset.o 00:04:02.871 CXX test/cpp_headers/crc16.o 00:04:02.871 CXX test/cpp_headers/crc32.o 00:04:02.871 CXX test/cpp_headers/config.o 00:04:02.871 CXX test/cpp_headers/crc64.o 00:04:02.871 CXX test/cpp_headers/dif.o 00:04:02.871 CXX test/cpp_headers/dma.o 00:04:02.871 CXX test/cpp_headers/env_dpdk.o 00:04:02.871 CXX test/cpp_headers/env.o 00:04:02.871 CXX test/cpp_headers/endian.o 00:04:02.871 CXX test/cpp_headers/event.o 00:04:02.871 CXX test/cpp_headers/fd.o 00:04:02.871 CXX test/cpp_headers/fd_group.o 00:04:02.871 CXX test/cpp_headers/file.o 00:04:02.871 CXX test/cpp_headers/gpt_spec.o 00:04:02.871 CXX test/cpp_headers/ftl.o 00:04:02.871 CXX test/cpp_headers/hexlify.o 00:04:02.871 CXX test/cpp_headers/histogram_data.o 00:04:02.871 CXX test/cpp_headers/idxd_spec.o 00:04:02.871 CXX test/cpp_headers/init.o 00:04:02.871 CXX test/cpp_headers/idxd.o 00:04:02.871 CXX test/cpp_headers/ioat_spec.o 00:04:02.871 CXX test/cpp_headers/iscsi_spec.o 00:04:02.871 CXX test/cpp_headers/ioat.o 00:04:02.871 CXX test/cpp_headers/json.o 00:04:02.871 CXX test/cpp_headers/keyring.o 00:04:02.871 CXX test/cpp_headers/jsonrpc.o 00:04:03.139 CXX test/cpp_headers/keyring_module.o 00:04:03.139 CXX test/cpp_headers/likely.o 00:04:03.139 CXX test/cpp_headers/log.o 00:04:03.139 CC examples/util/zipf/zipf.o 00:04:03.139 CXX test/cpp_headers/mmio.o 00:04:03.139 CXX test/cpp_headers/nbd.o 00:04:03.139 CXX test/cpp_headers/lvol.o 00:04:03.139 CXX test/cpp_headers/memory.o 00:04:03.139 CXX test/cpp_headers/nvme.o 00:04:03.139 CXX test/cpp_headers/notify.o 00:04:03.139 CC examples/blob/hello_world/hello_blob.o 00:04:03.139 CC examples/sock/hello_world/hello_sock.o 00:04:03.139 CXX test/cpp_headers/nvme_intel.o 00:04:03.139 CC examples/ioat/verify/verify.o 00:04:03.139 CC examples/blob/cli/blobcli.o 00:04:03.139 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.139 CC examples/accel/perf/accel_perf.o 00:04:03.139 CC test/app/histogram_perf/histogram_perf.o 00:04:03.139 CC examples/ioat/perf/perf.o 00:04:03.139 CC examples/vmd/led/led.o 00:04:03.139 CC examples/nvme/hello_world/hello_world.o 00:04:03.139 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:03.139 LINK spdk_lspci 00:04:03.140 CC test/nvme/err_injection/err_injection.o 00:04:03.140 CC test/nvme/aer/aer.o 00:04:03.140 CC test/nvme/boot_partition/boot_partition.o 00:04:03.140 CC test/nvme/sgl/sgl.o 00:04:03.140 CC test/nvme/overhead/overhead.o 00:04:03.140 CC examples/nvme/reconnect/reconnect.o 00:04:03.140 CC examples/thread/thread/thread_ex.o 00:04:03.140 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:03.140 CC test/nvme/reset/reset.o 00:04:03.140 CC test/app/jsoncat/jsoncat.o 00:04:03.140 CC examples/nvme/hotplug/hotplug.o 00:04:03.140 CC examples/nvme/arbitration/arbitration.o 00:04:03.140 CC test/nvme/compliance/nvme_compliance.o 00:04:03.140 CXX test/cpp_headers/nvme_spec.o 00:04:03.140 CC test/thread/poller_perf/poller_perf.o 00:04:03.140 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:03.140 CC examples/vmd/lsvmd/lsvmd.o 00:04:03.140 CC test/app/bdev_svc/bdev_svc.o 00:04:03.140 CC test/event/reactor/reactor.o 00:04:03.140 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:03.140 CC 
examples/bdev/hello_world/hello_bdev.o 00:04:03.140 CC test/nvme/simple_copy/simple_copy.o 00:04:03.140 CC test/app/stub/stub.o 00:04:03.140 CC test/nvme/fdp/fdp.o 00:04:03.140 CC test/nvme/fused_ordering/fused_ordering.o 00:04:03.140 CC test/env/pci/pci_ut.o 00:04:03.140 CC test/event/app_repeat/app_repeat.o 00:04:03.140 CC test/nvme/connect_stress/connect_stress.o 00:04:03.140 CC test/nvme/cuse/cuse.o 00:04:03.140 CC test/event/event_perf/event_perf.o 00:04:03.140 CC examples/nvme/abort/abort.o 00:04:03.140 CC app/fio/nvme/fio_plugin.o 00:04:03.140 CC test/nvme/reserve/reserve.o 00:04:03.140 CC examples/bdev/bdevperf/bdevperf.o 00:04:03.140 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:03.140 CC test/bdev/bdevio/bdevio.o 00:04:03.140 CC test/env/memory/memory_ut.o 00:04:03.140 CC test/event/reactor_perf/reactor_perf.o 00:04:03.140 CC examples/idxd/perf/perf.o 00:04:03.140 CC test/env/vtophys/vtophys.o 00:04:03.140 CC test/nvme/e2edp/nvme_dp.o 00:04:03.140 CC test/nvme/startup/startup.o 00:04:03.140 CC test/dma/test_dma/test_dma.o 00:04:03.140 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:03.140 CC test/event/scheduler/scheduler.o 00:04:03.140 CC app/fio/bdev/fio_plugin.o 00:04:03.409 CC examples/nvmf/nvmf/nvmf.o 00:04:03.409 CC test/accel/dif/dif.o 00:04:03.409 CC test/blobfs/mkfs/mkfs.o 00:04:03.409 LINK nvmf_tgt 00:04:03.409 LINK iscsi_tgt 00:04:03.409 CC test/lvol/esnap/esnap.o 00:04:03.678 LINK spdk_trace_record 00:04:03.678 CXX test/cpp_headers/nvme_zns.o 00:04:03.678 LINK spdk_nvme_discover 00:04:03.678 LINK rpc_client_test 00:04:03.678 LINK spdk_tgt 00:04:03.678 LINK interrupt_tgt 00:04:03.678 LINK led 00:04:03.678 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.678 LINK vhost 00:04:03.678 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.948 LINK boot_partition 00:04:03.948 LINK pmr_persistence 00:04:03.948 LINK fused_ordering 00:04:03.948 LINK bdev_svc 00:04:03.948 LINK poller_perf 00:04:03.948 LINK hello_blob 00:04:03.948 LINK cmb_copy 00:04:03.948 LINK app_repeat 00:04:03.948 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.948 LINK zipf 00:04:03.948 LINK hello_world 00:04:03.948 LINK connect_stress 00:04:03.948 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.948 LINK hello_sock 00:04:03.948 LINK histogram_perf 00:04:03.948 LINK lsvmd 00:04:03.948 LINK thread 00:04:03.948 CXX test/cpp_headers/nvmf.o 00:04:03.948 CXX test/cpp_headers/nvmf_spec.o 00:04:03.948 LINK stub 00:04:03.948 LINK reactor 00:04:03.948 LINK simple_copy 00:04:03.948 LINK jsoncat 00:04:03.948 CXX test/cpp_headers/opal_spec.o 00:04:03.948 CXX test/cpp_headers/opal.o 00:04:03.948 CXX test/cpp_headers/pci_ids.o 00:04:03.948 CXX test/cpp_headers/nvmf_transport.o 00:04:03.948 LINK reactor_perf 00:04:03.948 CXX test/cpp_headers/queue.o 00:04:03.948 LINK overhead 00:04:03.948 CXX test/cpp_headers/reduce.o 00:04:03.948 CXX test/cpp_headers/pipe.o 00:04:03.948 CXX test/cpp_headers/rpc.o 00:04:03.948 CXX test/cpp_headers/scheduler.o 00:04:03.948 LINK aer 00:04:03.949 CXX test/cpp_headers/scsi.o 00:04:03.949 CXX test/cpp_headers/scsi_spec.o 00:04:03.949 CXX test/cpp_headers/sock.o 00:04:03.949 LINK reset 00:04:03.949 CXX test/cpp_headers/stdinc.o 00:04:03.949 LINK event_perf 00:04:03.949 LINK err_injection 00:04:03.949 CXX test/cpp_headers/thread.o 00:04:03.949 CXX test/cpp_headers/string.o 00:04:03.949 CXX test/cpp_headers/trace.o 00:04:03.949 LINK ioat_perf 00:04:04.209 CXX test/cpp_headers/trace_parser.o 00:04:04.209 LINK verify 00:04:04.209 CXX test/cpp_headers/ublk.o 00:04:04.209 CXX test/cpp_headers/tree.o 
00:04:04.209 CXX test/cpp_headers/util.o 00:04:04.209 CXX test/cpp_headers/version.o 00:04:04.209 CXX test/cpp_headers/uuid.o 00:04:04.209 CXX test/cpp_headers/vfio_user_spec.o 00:04:04.209 CXX test/cpp_headers/vhost.o 00:04:04.209 CXX test/cpp_headers/vfio_user_pci.o 00:04:04.209 CXX test/cpp_headers/vmd.o 00:04:04.209 CXX test/cpp_headers/xor.o 00:04:04.209 CXX test/cpp_headers/zipf.o 00:04:04.209 LINK vtophys 00:04:04.209 LINK fdp 00:04:04.209 LINK reconnect 00:04:04.209 LINK mkfs 00:04:04.209 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:04.209 LINK spdk_trace 00:04:04.209 LINK env_dpdk_post_init 00:04:04.209 LINK spdk_dd 00:04:04.209 LINK scheduler 00:04:04.209 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:04.209 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:04.209 LINK doorbell_aers 00:04:04.209 LINK test_dma 00:04:04.209 LINK idxd_perf 00:04:04.209 LINK nvmf 00:04:04.209 LINK sgl 00:04:04.209 LINK startup 00:04:04.466 LINK hotplug 00:04:04.466 LINK reserve 00:04:04.466 LINK blobcli 00:04:04.466 LINK pci_ut 00:04:04.466 LINK hello_bdev 00:04:04.466 LINK nvme_manage 00:04:04.466 LINK arbitration 00:04:04.466 LINK nvme_compliance 00:04:04.466 LINK nvme_dp 00:04:04.466 LINK spdk_nvme 00:04:04.466 LINK accel_perf 00:04:04.466 LINK abort 00:04:04.466 LINK spdk_nvme_perf 00:04:04.466 LINK mem_callbacks 00:04:04.724 LINK bdevio 00:04:04.724 LINK dif 00:04:04.724 LINK spdk_top 00:04:04.724 LINK spdk_bdev 00:04:04.724 LINK vhost_fuzz 00:04:04.724 LINK nvme_fuzz 00:04:04.981 LINK spdk_nvme_identify 00:04:04.981 LINK memory_ut 00:04:04.981 LINK bdevperf 00:04:04.981 LINK cuse 00:04:05.546 LINK iscsi_fuzz 00:04:07.443 LINK esnap 00:04:07.700 00:04:07.700 real 0m38.890s 00:04:07.700 user 6m1.037s 00:04:07.700 sys 5m20.259s 00:04:07.700 00:40:54 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:07.700 00:40:54 make -- common/autotest_common.sh@10 -- $ set +x 00:04:07.700 ************************************ 00:04:07.700 END TEST make 00:04:07.700 ************************************ 00:04:07.700 00:40:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:07.700 00:40:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:07.700 00:40:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:07.700 00:40:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.700 00:40:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:07.700 00:40:54 -- pm/common@44 -- $ pid=3162422 00:04:07.700 00:40:54 -- pm/common@50 -- $ kill -TERM 3162422 00:04:07.700 00:40:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.700 00:40:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:07.700 00:40:54 -- pm/common@44 -- $ pid=3162424 00:04:07.700 00:40:54 -- pm/common@50 -- $ kill -TERM 3162424 00:04:07.700 00:40:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.700 00:40:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:07.700 00:40:54 -- pm/common@44 -- $ pid=3162426 00:04:07.700 00:40:54 -- pm/common@50 -- $ kill -TERM 3162426 00:04:07.700 00:40:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.700 00:40:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:07.700 00:40:54 -- pm/common@44 -- $ pid=3162454 00:04:07.700 00:40:54 -- 
pm/common@50 -- $ sudo -E kill -TERM 3162454 00:04:07.959 00:40:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.959 00:40:54 -- nvmf/common.sh@7 -- # uname -s 00:04:07.959 00:40:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.959 00:40:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.959 00:40:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.959 00:40:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.959 00:40:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.959 00:40:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.959 00:40:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.959 00:40:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.959 00:40:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.959 00:40:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.959 00:40:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:07.959 00:40:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:07.959 00:40:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.959 00:40:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.959 00:40:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:07.959 00:40:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.959 00:40:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:07.959 00:40:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.959 00:40:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.959 00:40:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.959 00:40:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.959 00:40:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.959 00:40:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.959 00:40:54 -- paths/export.sh@5 -- # export PATH 00:04:07.959 00:40:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.959 00:40:54 -- nvmf/common.sh@47 -- # : 0 00:04:07.959 00:40:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:07.959 00:40:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:07.959 00:40:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.959 00:40:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:04:07.959 00:40:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.959 00:40:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:07.959 00:40:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:07.959 00:40:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:07.959 00:40:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.959 00:40:54 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.959 00:40:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.959 00:40:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.959 00:40:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:04:07.959 00:40:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.959 00:40:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:04:07.959 00:40:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.959 00:40:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.959 00:40:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.959 00:40:54 -- spdk/autotest.sh@48 -- # udevadm_pid=3221489 00:04:07.959 00:40:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:07.959 00:40:54 -- pm/common@17 -- # local monitor 00:04:07.959 00:40:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:07.959 00:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.959 00:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.959 00:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.959 00:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.959 00:40:54 -- pm/common@25 -- # sleep 1 00:04:07.959 00:40:54 -- pm/common@21 -- # date +%s 00:04:07.959 00:40:54 -- pm/common@21 -- # date +%s 00:04:07.959 00:40:54 -- pm/common@21 -- # date +%s 00:04:07.959 00:40:54 -- pm/common@21 -- # date +%s 00:04:07.959 00:40:54 -- pm/common@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726454 00:04:07.959 00:40:54 -- pm/common@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726454 00:04:07.959 00:40:54 -- pm/common@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726454 00:04:07.959 00:40:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726454 00:04:07.959 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726454_collect-cpu-load.pm.log 00:04:07.959 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726454_collect-vmstat.pm.log 00:04:07.959 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726454_collect-cpu-temp.pm.log 00:04:07.959 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726454_collect-bmc-pm.bmc.pm.log 00:04:08.890 
00:40:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:08.890 00:40:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:08.890 00:40:55 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:08.890 00:40:55 -- common/autotest_common.sh@10 -- # set +x 00:04:08.890 00:40:55 -- spdk/autotest.sh@59 -- # create_test_list 00:04:08.890 00:40:55 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:08.890 00:40:55 -- common/autotest_common.sh@10 -- # set +x 00:04:08.891 00:40:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:04:08.891 00:40:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:08.891 00:40:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:08.891 00:40:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:04:08.891 00:40:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:08.891 00:40:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:08.891 00:40:55 -- common/autotest_common.sh@1451 -- # uname 00:04:08.891 00:40:55 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:08.891 00:40:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:08.891 00:40:55 -- common/autotest_common.sh@1471 -- # uname 00:04:08.891 00:40:55 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:08.891 00:40:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:08.891 00:40:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:08.891 00:40:55 -- spdk/autotest.sh@72 -- # hash lcov 00:04:08.891 00:40:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:08.891 00:40:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:08.891 --rc lcov_branch_coverage=1 00:04:08.891 --rc lcov_function_coverage=1 00:04:08.891 --rc genhtml_branch_coverage=1 00:04:08.891 --rc genhtml_function_coverage=1 00:04:08.891 --rc genhtml_legend=1 00:04:08.891 --rc geninfo_all_blocks=1 00:04:08.891 ' 00:04:08.891 00:40:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:08.891 --rc lcov_branch_coverage=1 00:04:08.891 --rc lcov_function_coverage=1 00:04:08.891 --rc genhtml_branch_coverage=1 00:04:08.891 --rc genhtml_function_coverage=1 00:04:08.891 --rc genhtml_legend=1 00:04:08.891 --rc geninfo_all_blocks=1 00:04:08.891 ' 00:04:08.891 00:40:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:08.891 --rc lcov_branch_coverage=1 00:04:08.891 --rc lcov_function_coverage=1 00:04:08.891 --rc genhtml_branch_coverage=1 00:04:08.891 --rc genhtml_function_coverage=1 00:04:08.891 --rc genhtml_legend=1 00:04:08.891 --rc geninfo_all_blocks=1 00:04:08.891 --no-external' 00:04:08.891 00:40:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:08.891 --rc lcov_branch_coverage=1 00:04:08.891 --rc lcov_function_coverage=1 00:04:08.891 --rc genhtml_branch_coverage=1 00:04:08.891 --rc genhtml_function_coverage=1 00:04:08.891 --rc genhtml_legend=1 00:04:08.891 --rc geninfo_all_blocks=1 00:04:08.891 --no-external' 00:04:08.891 00:40:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:08.891 lcov: LCOV version 1.14 00:04:08.891 00:40:55 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info 00:04:15.439 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:15.439 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:15.439 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:15.439 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:15.439 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:15.439 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:15.439 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:15.439 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:23.563 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:23.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:23.564 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:23.564 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:23.564 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:23.564 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:23.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:23.565 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:23.565 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:23.565 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:24.499 00:41:11 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:24.499 00:41:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.499 00:41:11 -- common/autotest_common.sh@10 -- # set +x 00:04:24.499 00:41:11 -- spdk/autotest.sh@91 -- # rm -f 00:04:24.499 00:41:11 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.022 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:04:27.022 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:04:27.022 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:04:27.022 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:04:27.022 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:04:27.022 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:04:27.022 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:04:27.280 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:04:27.280 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:04:27.280 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:04:27.280 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:04:27.280 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:04:27.280 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:04:27.280 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:04:27.280 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:04:27.280 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:04:27.280 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:04:27.280 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:04:27.539 00:41:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:27.539 00:41:14 -- 
common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:27.539 00:41:14 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:27.539 00:41:14 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:27.539 00:41:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:27.539 00:41:14 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:27.539 00:41:14 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:27.539 00:41:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.539 00:41:14 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:27.539 00:41:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:27.539 00:41:14 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:27.539 00:41:14 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:27.539 00:41:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:27.539 00:41:14 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:27.539 00:41:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:27.539 00:41:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.539 00:41:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:27.539 00:41:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:27.539 00:41:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:27.539 00:41:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:27.539 No valid GPT data, bailing 00:04:27.539 00:41:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.539 00:41:14 -- scripts/common.sh@391 -- # pt= 00:04:27.539 00:41:14 -- scripts/common.sh@392 -- # return 1 00:04:27.539 00:41:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:27.539 1+0 records in 00:04:27.539 1+0 records out 00:04:27.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00137359 s, 763 MB/s 00:04:27.539 00:41:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.539 00:41:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:27.539 00:41:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:27.539 00:41:14 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:27.539 00:41:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:27.539 No valid GPT data, bailing 00:04:27.539 00:41:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:27.539 00:41:14 -- scripts/common.sh@391 -- # pt= 00:04:27.539 00:41:14 -- scripts/common.sh@392 -- # return 1 00:04:27.539 00:41:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:27.539 1+0 records in 00:04:27.539 1+0 records out 00:04:27.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00295954 s, 354 MB/s 00:04:27.539 00:41:14 -- spdk/autotest.sh@118 -- # sync 00:04:27.539 00:41:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:27.539 00:41:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:27.539 00:41:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:32.857 00:41:19 -- spdk/autotest.sh@124 -- # uname -s 00:04:32.857 00:41:19 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:32.857 00:41:19 -- spdk/autotest.sh@125 -- # run_test setup.sh 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:04:32.857 00:41:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.857 00:41:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.857 00:41:19 -- common/autotest_common.sh@10 -- # set +x 00:04:32.857 ************************************ 00:04:32.857 START TEST setup.sh 00:04:32.857 ************************************ 00:04:32.857 00:41:19 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:04:32.857 * Looking for test storage... 00:04:32.857 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:04:32.857 00:41:19 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:32.857 00:41:19 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:32.857 00:41:19 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:04:32.857 00:41:19 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.857 00:41:19 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.857 00:41:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.857 ************************************ 00:04:32.857 START TEST acl 00:04:32.857 ************************************ 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:04:32.857 * Looking for test storage... 00:04:32.857 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:04:32.857 00:41:19 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:32.857 00:41:19 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:32.857 00:41:19 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:32.857 00:41:19 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:32.857 00:41:19 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:32.857 00:41:19 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:32.857 00:41:19 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:32.857 00:41:19 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.857 00:41:19 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 
reset 00:04:36.139 00:41:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:36.139 00:41:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:36.139 00:41:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:36.139 00:41:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:36.139 00:41:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.139 00:41:22 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:04:38.664 Hugepages 00:04:38.664 node hugesize free / total 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:04:38.664 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:03:00.0 == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:04:38.664 
00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.664 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:e7:01.0 == *:*:*.* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 
00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 == *:*:*.* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:38.922 00:41:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:38.922 00:41:25 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.922 00:41:25 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.922 00:41:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:38.922 ************************************ 00:04:38.922 START TEST denied 00:04:38.922 ************************************ 00:04:38.922 00:41:25 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:38.922 00:41:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:03:00.0' 00:04:38.922 00:41:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:38.922 00:41:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:03:00.0' 00:04:38.922 00:41:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.922 00:41:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:43.098 0000:03:00.0 (1344 51c3): Skipping denied controller at 0000:03:00.0 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:03:00.0 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:03:00.0 ]] 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:03:00.0/driver 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.098 00:41:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.283 00:04:47.283 real 0m7.905s 00:04:47.283 user 0m1.913s 00:04:47.283 sys 0m3.864s 00:04:47.283 00:41:33 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.283 00:41:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:47.283 ************************************ 00:04:47.283 END TEST denied 00:04:47.283 ************************************ 00:04:47.283 00:41:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:47.283 00:41:33 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.283 00:41:33 
setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.283 00:41:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:47.283 ************************************ 00:04:47.283 START TEST allowed 00:04:47.283 ************************************ 00:04:47.283 00:41:33 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:47.283 00:41:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:03:00.0 00:04:47.283 00:41:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:03:00.0 .*: nvme -> .*' 00:04:47.283 00:41:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:47.283 00:41:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.283 00:41:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:50.562 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:c9:00.0 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.562 00:41:37 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.839 00:04:53.839 real 0m6.755s 00:04:53.839 user 0m1.800s 00:04:53.839 sys 0m3.798s 00:04:53.839 00:41:40 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.839 00:41:40 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:53.839 ************************************ 00:04:53.839 END TEST allowed 00:04:53.839 ************************************ 00:04:53.839 00:04:53.839 real 0m21.103s 00:04:53.839 user 0m5.866s 00:04:53.839 sys 0m11.756s 00:04:53.839 00:41:40 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.839 00:41:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:53.839 ************************************ 00:04:53.839 END TEST acl 00:04:53.839 ************************************ 00:04:53.839 00:41:40 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:04:53.839 00:41:40 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.839 00:41:40 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.839 00:41:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.839 ************************************ 00:04:53.839 START TEST hugepages 00:04:53.839 ************************************ 00:04:53.839 00:41:40 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:04:53.839 * Looking for test storage... 
00:04:53.839 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.839 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 109138316 kB' 'MemAvailable: 112427288 kB' 'Buffers: 2696 kB' 'Cached: 9388524 kB' 'SwapCached: 0 kB' 'Active: 6445888 kB' 'Inactive: 3418944 kB' 'Active(anon): 5880236 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483052 kB' 'Mapped: 194804 kB' 'Shmem: 5406624 kB' 'KReclaimable: 258012 kB' 'Slab: 840020 kB' 'SReclaimable: 258012 kB' 'SUnreclaim: 582008 kB' 'KernelStack: 24768 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69510436 kB' 'Committed_AS: 7343488 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228592 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 
00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.840 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:53.841 00:41:40 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:53.841 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:53.842 00:41:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:53.842 00:41:40 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.842 00:41:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.842 00:41:40 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.842 ************************************ 00:04:53.842 START TEST default_setup 00:04:53.842 ************************************ 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.842 00:41:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:04:56.368 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:56.368 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:56.368 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:56.368 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:04:56.368 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:56.368 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:04:56.368 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:56.368 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:04:56.628 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:56.628 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:04:56.628 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:04:56.628 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:04:56.628 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:56.628 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:04:56.628 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:56.628 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 
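The trace above shows setup/common.sh walking /proc/meminfo key by key until it reaches Hugepagesize (2048 kB), and setup/hugepages.sh then sizing the default_setup request at 2097152 kB / 2048 kB = 1024 pages on node 0. Below is a minimal bash sketch of that lookup and arithmetic, assuming the standard /proc and sysfs layout; get_meminfo_sketch is an invented name and the code only illustrates the traced behaviour, it is not the SPDK helper itself.

#!/usr/bin/env bash
# Hypothetical illustration (names invented): look up one /proc/meminfo key
# the way the trace above does -- strip any "Node N " prefix, split on ': ',
# and print the value of the requested field.
shopt -s extglob

get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo

    local line var val rest
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }          # per-node meminfo prefixes each row
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$key" ]]; then
            printf '%s\n' "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# The arithmetic behind nr_hugepages=1024 in the trace: 2097152 kB requested,
# divided by the 2048 kB default huge page size.
hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)
echo "pages for 2097152 kB: $(( 2097152 / hugepagesize_kb ))"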
00:04:57.196 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:04:57.456 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111397048 kB' 'MemAvailable: 114685500 kB' 'Buffers: 2696 kB' 'Cached: 9388764 kB' 'SwapCached: 0 kB' 'Active: 6470416 kB' 'Inactive: 3418944 kB' 'Active(anon): 5904764 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507472 kB' 'Mapped: 195392 kB' 'Shmem: 5406864 kB' 'KReclaimable: 256972 kB' 'Slab: 832456 kB' 'SReclaimable: 256972 kB' 'SUnreclaim: 575484 kB' 'KernelStack: 24704 kB' 'PageTables: 9832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7409620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228528 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:04:57.720 
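Before verify_nr_hugepages runs, the clear_hp pass traced earlier writes 0 into every per-node huge page pool (CLEAR_HUGE=yes), and setup.sh rebinds the idxd and NVMe devices listed above to vfio-pci. The sketch below, assuming the usual /sys/devices/system/node layout and using invented function names, illustrates that clear-then-report cycle; it is not the SPDK implementation, and the writes require root.

#!/usr/bin/env bash
# Illustrative only (function names invented, root required for the writes):
# clear every per-node huge page pool, as the CLEAR_HUGE=yes / clear_hp pass
# above does, then report what each node advertises afterwards.
set -euo pipefail

clear_hp_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"      # release the node's reserved pool
        done
    done
}

report_hp_sketch() {
    local node hp size
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}          # e.g. 2048kB
            printf '%s %s: %s pages\n' "${node##*/}" "$size" "$(< "$hp/nr_hugepages")"
        done
    done
}

clear_hp_sketch
report_hp_sketch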
00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.720 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.721 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111399552 kB' 'MemAvailable: 114688004 kB' 'Buffers: 2696 kB' 'Cached: 9388764 kB' 'SwapCached: 0 kB' 'Active: 6471804 kB' 'Inactive: 3418944 kB' 'Active(anon): 5906152 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508544 kB' 'Mapped: 195836 kB' 'Shmem: 5406864 kB' 'KReclaimable: 256972 kB' 'Slab: 832340 kB' 'SReclaimable: 256972 kB' 'SUnreclaim: 575368 kB' 'KernelStack: 24512 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7413268 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228480 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.722 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.723 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.724 00:41:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111393324 kB' 'MemAvailable: 114681776 kB' 'Buffers: 2696 kB' 'Cached: 9388780 kB' 'SwapCached: 0 kB' 'Active: 6475348 kB' 'Inactive: 3418944 kB' 'Active(anon): 5909696 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 
9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511952 kB' 'Mapped: 195728 kB' 'Shmem: 5406880 kB' 'KReclaimable: 256972 kB' 'Slab: 832188 kB' 'SReclaimable: 256972 kB' 'SUnreclaim: 575216 kB' 'KernelStack: 24672 kB' 'PageTables: 10076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7416456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228576 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.724 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.725 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:57.726 nr_hugepages=1024 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.726 resv_hugepages=0 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.726 surplus_hugepages=0 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.726 anon_hugepages=0 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111393440 kB' 'MemAvailable: 114681892 kB' 'Buffers: 2696 kB' 'Cached: 9388808 kB' 'SwapCached: 0 kB' 'Active: 6476408 kB' 'Inactive: 3418944 kB' 'Active(anon): 5910756 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513028 kB' 'Mapped: 196004 kB' 'Shmem: 5406908 kB' 'KReclaimable: 256972 kB' 'Slab: 832188 kB' 'SReclaimable: 256972 kB' 'SUnreclaim: 575216 kB' 'KernelStack: 24784 kB' 'PageTables: 10120 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7417548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228532 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.726 00:41:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.726 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
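Each "IFS=': '" / "read -r var val _" pair in the trace above, followed by a "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" test and "continue", is one iteration of a lookup over the meminfo lines printed at setup/common.sh@16; the scan stops with "echo <value>" and "return 0" once the requested key is reached. A minimal, self-contained sketch of that lookup, reconstructed from the xtrace rather than taken from the SPDK setup/common.sh source (the function name, argument handling and loop shape are assumptions):

#!/usr/bin/env bash
# Reconstruction inferred from the xtrace (setup/common.sh@16-33); not the SPDK source.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument the per-node file is used instead (seen for node0 later in this log).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"            # assumption: redirections are not shown by xtrace
    mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node N " prefix of per-node entries
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue # the test repeated for every key in the trace
        echo "$val"                      # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
        return 0
    done
    return 1
}

Against the meminfo snapshot recorded in this run, "get_meminfo_sketch HugePages_Surp" would print 0 and "get_meminfo_sketch HugePages_Total" would print 1024.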
00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.727 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 59847108 kB' 'MemUsed: 5908872 kB' 'SwapCached: 0 kB' 'Active: 1799864 kB' 'Inactive: 71096 kB' 'Active(anon): 1712532 kB' 'Inactive(anon): 0 kB' 'Active(file): 87332 kB' 'Inactive(file): 71096 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1522008 kB' 'Mapped: 68808 kB' 'AnonPages: 358036 kB' 'Shmem: 1363580 kB' 'KernelStack: 14520 kB' 'PageTables: 6812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127456 kB' 'Slab: 448028 kB' 'SReclaimable: 127456 kB' 'SUnreclaim: 320572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
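The hugepages.sh@107-117 portion of the trace (spanning the end of the previous chunk and the start of this one) first checks that the kernel reports exactly nr_hugepages + surp + resv pages (1024 == 1024 + 0 + 0 here), then walks the NUMA nodes, and finally re-queries HugePages_Surp per node, which is the node0 scan that follows. A rough reconstruction of the node walk at hugepages.sh@27-33, offered as an illustration under assumptions: the xtrace only shows the already-expanded assignments (1024 for node0, 0 for node1), so the sysfs read on the right-hand side is a stand-in, not the scripts' actual source.

#!/usr/bin/env bash
shopt -s extglob

declare -a nodes_sys
no_nodes=0

get_nodes_sketch() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node count of 2048 kB hugepages; 1024 on node0 and 0 on node1 in this run
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on the machine in this log
    (( no_nodes > 0 ))
}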
00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
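Once the scan reaches HugePages_Surp (the "echo 0" / "return 0" a few entries below), hugepages.sh folds that surplus, together with any reserved pages, into the per-node tally before the final "node0=1024 expecting 1024" comparison (hugepages.sh@115-@130 in this trace). A hedged sketch of that bookkeeping: nodes_test/nodes_sys mirror the trace (nodes_sys is filled while scanning /sys/devices/system/node at hugepages.sh@29-@30 above), while node_meminfo and verify_node are hypothetical helpers added for the sketch, with an awk stand-in for the traced meminfo lookup.

#!/usr/bin/env bash
# Hedged sketch of the per-node bookkeeping traced at hugepages.sh@115-@130.
# nodes_sys[] holds what the /sys node scan reported; nodes_test[] holds the
# expected split, adjusted by reserved and surplus pages before the comparison.
declare -a nodes_test nodes_sys

node_meminfo() {
    # Stand-in for the traced helper; per-node lines look like "Node 0 HugePages_Surp:  0".
    awk -v key="$1:" '$3 == key { print $4 }' "/sys/devices/system/node/node$2/meminfo"
}

verify_node() {
    local node=$1 resv=${2:-0} surp
    (( nodes_test[node] += resv ))                     # reserved pages count toward the expectation
    surp=$(node_meminfo HugePages_Surp "$node")
    (( nodes_test[node] += surp ))                     # ...and so do surplus pages
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"   # same shape as the traced output line
    [[ ${nodes_sys[node]} -eq ${nodes_test[node]} ]]   # mirrors the trace's final 1024 == 1024 check
}

# Example matching the run above: 1024 pages reported on node 0, 1024 expected, nothing reserved.
nodes_sys[0]=1024; nodes_test[0]=1024
verify_node 0 0 && echo OK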
00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.728 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:57.729 node0=1024 expecting 1024 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:57.729 00:04:57.729 real 0m3.886s 00:04:57.729 user 0m0.892s 00:04:57.729 sys 0m1.688s 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.729 00:41:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:57.729 ************************************ 00:04:57.729 END TEST default_setup 00:04:57.729 ************************************ 00:04:57.729 00:41:44 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:57.729 00:41:44 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.729 00:41:44 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.729 00:41:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:57.729 ************************************ 00:04:57.729 START TEST per_node_1G_alloc 00:04:57.729 ************************************ 00:04:57.729 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:57.729 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:57.729 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:57.729 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:57.729 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:57.729 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:57.729 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- 
# local user_nodes 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.730 00:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:05:01.024 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:01.024 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:05:01.024 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:01.024 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:01.024 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:01.024 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:01.024 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:01.024 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:01.025 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:01.025 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:01.025 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:01.025 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:01.025 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:01.025 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:01.025 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:01.025 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:01.025 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:01.025 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.025 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111407840 kB' 'MemAvailable: 114696292 kB' 'Buffers: 2696 kB' 'Cached: 9388920 kB' 'SwapCached: 0 kB' 'Active: 6471976 kB' 'Inactive: 3418944 kB' 'Active(anon): 5906324 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508404 kB' 'Mapped: 195380 kB' 'Shmem: 5407020 kB' 'KReclaimable: 256972 kB' 'Slab: 834176 kB' 'SReclaimable: 256972 kB' 'SUnreclaim: 577204 kB' 'KernelStack: 24608 kB' 'PageTables: 9680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7410296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228576 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 
00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111407988 kB' 'MemAvailable: 114696440 kB' 'Buffers: 2696 kB' 'Cached: 9388924 kB' 'SwapCached: 0 kB' 'Active: 6471668 kB' 'Inactive: 3418944 kB' 'Active(anon): 5906016 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508672 kB' 'Mapped: 195340 kB' 'Shmem: 5407024 kB' 'KReclaimable: 256972 kB' 'Slab: 834176 kB' 'SReclaimable: 256972 kB' 'SUnreclaim: 577204 kB' 'KernelStack: 24768 kB' 'PageTables: 9896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7413120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228592 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 
00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111406564 kB' 'MemAvailable: 114695016 kB' 'Buffers: 2696 kB' 'Cached: 9388924 kB' 'SwapCached: 0 kB' 'Active: 6472044 kB' 'Inactive: 3418944 kB' 'Active(anon): 5906392 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508536 kB' 'Mapped: 195236 kB' 'Shmem: 5407024 kB' 'KReclaimable: 256972 kB' 'Slab: 834176 kB' 'SReclaimable: 256972 kB' 'SUnreclaim: 577204 kB' 'KernelStack: 24864 kB' 'PageTables: 9912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7410336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228656 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 
00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 
00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.028 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.029 nr_hugepages=1024 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.029 resv_hugepages=0 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.029 surplus_hugepages=0 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.029 anon_hugepages=0 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111405180 kB' 'MemAvailable: 114693632 kB' 'Buffers: 2696 kB' 'Cached: 9388964 kB' 'SwapCached: 0 kB' 'Active: 6471744 kB' 'Inactive: 3418944 kB' 'Active(anon): 5906092 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508180 kB' 'Mapped: 195244 kB' 'Shmem: 5407064 kB' 'KReclaimable: 256972 kB' 'Slab: 834692 kB' 'SReclaimable: 256972 kB' 'SUnreclaim: 577720 kB' 'KernelStack: 24784 kB' 'PageTables: 9900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7411088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228592 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.029 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
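The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]' / 'continue' entries above are the bash xtrace of the get_meminfo helper in setup/common.sh scanning /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node id is given) one 'key: value' pair at a time; every non-matching key is skipped with 'continue' and the matching value is echoed back, which is where surp=0, resv=0 and the nr_hugepages=1024 figure in this run come from. Below is a minimal standalone sketch of that lookup pattern, assuming only what the trace shows; the function name lookup_meminfo and its argument handling are illustrative, not the exact setup/common.sh source.

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup pattern visible in the xtrace above.
# The helper name and argument handling are illustrative; the steps mirror
# the trace: pick the system-wide or per-node meminfo file, then scan
# "key: value" pairs until the requested counter matches and print its value.

lookup_meminfo() {
    local get=$1        # e.g. HugePages_Total, HugePages_Rsvd, HugePages_Surp
    local node=${2:-}   # optional NUMA node id, e.g. 0
    local mem_f=/proc/meminfo
    local line var val _

    # Per-node counters live under sysfs (the trace tests this with [[ -e ... ]]).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <id> "; strip it so the
        # key comparison is the same for both file layouts.
        [[ $line == Node\ * ]] && line=${line#Node * }
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching key produces one "continue" in the xtrace above.
        [[ $var == "$get" ]] || continue
        echo "$val"     # sizes are reported in kB, HugePages_* are plain counts
        return 0
    done < "$mem_f"
    return 1
}

# Example: reproduce the three system-wide lookups traced in this run,
# plus one per-node lookup as done later for node 0.
for key in HugePages_Surp HugePages_Rsvd HugePages_Total; do
    printf '%s=%s\n' "$key" "$(lookup_meminfo "$key")"
done
printf 'node0 HugePages_Surp=%s\n' "$(lookup_meminfo HugePages_Surp 0)"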
00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.030 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.031 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 60913752 kB' 'MemUsed: 4842228 kB' 'SwapCached: 0 kB' 'Active: 1800388 kB' 'Inactive: 71096 kB' 'Active(anon): 1713056 kB' 'Inactive(anon): 0 kB' 'Active(file): 87332 kB' 'Inactive(file): 71096 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1522008 kB' 'Mapped: 68668 kB' 'AnonPages: 358476 kB' 'Shmem: 1363580 kB' 'KernelStack: 14728 kB' 'PageTables: 7368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127456 kB' 'Slab: 449844 kB' 'SReclaimable: 127456 kB' 'SUnreclaim: 322388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.031 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.031 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681992 kB' 'MemFree: 50490400 kB' 'MemUsed: 10191592 kB' 'SwapCached: 0 kB' 'Active: 4677140 kB' 'Inactive: 3347848 kB' 'Active(anon): 4198820 kB' 'Inactive(anon): 0 kB' 'Active(file): 478320 kB' 'Inactive(file): 3347848 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7869692 kB' 'Mapped: 127064 kB' 'AnonPages: 155532 kB' 'Shmem: 4043524 kB' 'KernelStack: 10024 kB' 'PageTables: 2660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129516 kB' 'Slab: 384848 kB' 'SReclaimable: 129516 kB' 'SUnreclaim: 255332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.032 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:01.033 node0=512 expecting 512 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:01.033 node1=512 expecting 512 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:01.033 00:05:01.033 real 0m3.134s 00:05:01.033 user 0m1.083s 00:05:01.033 sys 0m1.834s 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.033 00:41:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:01.033 ************************************ 00:05:01.033 END TEST per_node_1G_alloc 00:05:01.033 ************************************ 00:05:01.033 00:41:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:01.033 00:41:47 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.033 00:41:47 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.033 00:41:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:01.033 ************************************ 00:05:01.033 START TEST even_2G_alloc 00:05:01.033 ************************************ 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@49 -- # local size=2097152 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.033 00:41:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:05:03.610 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:03.610 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:05:03.610 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:03.610 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:03.867 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:03.867 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:03.867 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:03.867 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:03.867 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:03.867 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:03.867 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 
00:05:03.867 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:03.867 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:03.867 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:03.867 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:03.867 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:03.867 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:03.867 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111432612 kB' 'MemAvailable: 114720996 kB' 'Buffers: 2696 kB' 'Cached: 9389076 kB' 'SwapCached: 0 kB' 'Active: 6460716 kB' 'Inactive: 3418944 kB' 'Active(anon): 5895064 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496544 kB' 'Mapped: 193928 kB' 'Shmem: 5407176 kB' 'KReclaimable: 256836 kB' 'Slab: 832912 kB' 'SReclaimable: 256836 kB' 'SUnreclaim: 576076 kB' 'KernelStack: 24544 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7349432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228352 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.132 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.133 00:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.133 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111435096 kB' 'MemAvailable: 114723480 kB' 'Buffers: 2696 kB' 'Cached: 9389080 kB' 'SwapCached: 0 kB' 'Active: 6460872 kB' 'Inactive: 3418944 kB' 'Active(anon): 5895220 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496708 kB' 'Mapped: 193936 kB' 'Shmem: 5407180 kB' 'KReclaimable: 256836 kB' 'Slab: 832876 kB' 'SReclaimable: 256836 kB' 'SUnreclaim: 576040 kB' 'KernelStack: 24576 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7349452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228288 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.134 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.135 00:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111436876 kB' 'MemAvailable: 114725260 kB' 'Buffers: 2696 kB' 'Cached: 9389096 kB' 'SwapCached: 0 kB' 'Active: 6458620 kB' 'Inactive: 3418944 kB' 'Active(anon): 5892968 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494900 kB' 'Mapped: 193788 kB' 'Shmem: 5407196 kB' 'KReclaimable: 256836 kB' 'Slab: 833048 kB' 'SReclaimable: 256836 kB' 'SUnreclaim: 576212 kB' 'KernelStack: 24320 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7346800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228176 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.135 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 
00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.136 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
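The trace above is the get_meminfo helper scanning the memory counters one key at a time: it reads /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given), strips any leading "Node <N> " prefix, splits each line on ': ', and skips every key with continue until it reaches the requested one, then echoes that value and returns. A minimal re-creation of that traced logic is sketched below; it is not the verbatim setup/common.sh source, and the extglob prefix-strip is an assumption carried over from the trace.

    # Sketch only: re-creation of the lookup loop traced above, not setup/common.sh itself.
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        shopt -s extglob                                   # needed for the +([0-9]) strip below
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")                   # drop the "Node N " prefix on per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"         # e.g. var=HugePages_Rsvd val=0
            [[ $var == "$get" ]] || continue               # the long runs of 'continue' entries above
            echo "$val"
            return 0
        done
        echo 0
    }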
00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:04.137 00:41:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:04.137 nr_hugepages=1024 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.137 resv_hugepages=0 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.137 surplus_hugepages=0 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.137 anon_hugepages=0 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.137 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111438120 kB' 'MemAvailable: 114726504 kB' 'Buffers: 2696 kB' 'Cached: 9389120 kB' 'SwapCached: 0 kB' 'Active: 6458604 kB' 'Inactive: 3418944 kB' 'Active(anon): 5892952 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494936 kB' 'Mapped: 193788 kB' 'Shmem: 5407220 kB' 'KReclaimable: 256836 kB' 'Slab: 833048 kB' 'SReclaimable: 256836 kB' 'SUnreclaim: 576212 kB' 'KernelStack: 24352 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7346456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228160 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
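For reference, the accounting that the even_2G_alloc test performs once these lookups finish (hugepages.sh@97 through @110 in the trace) boils down to comparing the requested page count against what the kernel now reports. A condensed, hedged restatement of those checks follows; the variable names mirror the trace, but the exact expressions in hugepages.sh may differ.

    # Sketch only: condensed form of the checks traced at hugepages.sh@107-110.
    nr_hugepages=1024                          # 1024 x 2048 kB pages = 2 GB, per the test name
    anon=$(get_meminfo AnonHugePages)          # 0 in the trace
    surp=$(get_meminfo HugePages_Surp)         # 0
    resv=$(get_meminfo HugePages_Rsvd)         # 0
    total=$(get_meminfo HugePages_Total)       # 1024

    # The pool is only considered correct if the reported total matches the request
    # once surplus and reserved pages are folded in.
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2
    (( total == nr_hugepages )) || echo "allocation fell short of $nr_hugepages pages" >&2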
00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.138 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 
== nr_hugepages + surp + resv )) 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 60945984 kB' 'MemUsed: 4809996 kB' 'SwapCached: 0 kB' 'Active: 1790504 kB' 'Inactive: 71096 kB' 'Active(anon): 1703172 kB' 'Inactive(anon): 0 kB' 'Active(file): 87332 kB' 'Inactive(file): 71096 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1522008 kB' 'Mapped: 67336 kB' 'AnonPages: 348692 kB' 'Shmem: 1363580 kB' 'KernelStack: 14168 kB' 'PageTables: 5112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127264 kB' 'Slab: 448524 kB' 'SReclaimable: 127264 kB' 'SUnreclaim: 321260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
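[editor's note] The trace above is setup/common.sh's get_meminfo walking a per-node meminfo file field by field until it reaches the requested key (HugePages_Surp for node 0 here), echoing the value and returning. A minimal stand-alone sketch of the same lookup, assuming only the standard /proc/meminfo and /sys/devices/system/node/nodeN/meminfo formats; lookup_meminfo is an illustrative name, not the SPDK helper, and the prefix stripping uses sed rather than the extglob expansion seen in the trace:

    lookup_meminfo() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # Per-node files live under sysfs and prefix every line with "Node <N> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Strip the "Node <N> " prefix, split each line on ": ", and print the
        # value of the first field that matches the requested key.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }

On the snapshots printed in this run, lookup_meminfo HugePages_Total would report 1024 system-wide and lookup_meminfo HugePages_Surp 0 would report 0 for node 0, matching the echo/return values in the trace. [end note]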
00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.139 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
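[editor's note] These per-node surplus lookups feed the "node0=512 expecting 512" / "node1=512 expecting 512" checks further down: each node's expected share plus any surplus/reserved pages has to match what the kernel actually allocated. A rough stand-alone version of that comparison, reading the per-node 2 MiB counters directly from sysfs instead of parsing meminfo; the paths are the standard kernel interface, the 512-per-node expectation is taken from this run, and hugepages-2048kB assumes the 2048 kB default page size reported later in the log:

    expected_per_node=512   # 1024 pages spread evenly over the 2 NUMA nodes seen above
    ok=1
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        got=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${node}=${got} expecting ${expected_per_node}"
        (( got == expected_per_node )) || ok=0
    done
    (( ok )) && echo "even 2G allocation verified" || echo "per-node hugepage counts do not match"

[end note]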
00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 
00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.140 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681992 kB' 'MemFree: 50492300 kB' 'MemUsed: 10189692 kB' 'SwapCached: 0 kB' 'Active: 4667648 kB' 'Inactive: 3347848 kB' 'Active(anon): 4189328 kB' 'Inactive(anon): 0 kB' 'Active(file): 478320 kB' 'Inactive(file): 3347848 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7869848 kB' 'Mapped: 126452 kB' 'AnonPages: 145748 kB' 'Shmem: 4043680 kB' 'KernelStack: 10120 kB' 'PageTables: 2860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129508 kB' 'Slab: 384452 kB' 'SReclaimable: 129508 kB' 'SUnreclaim: 254944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.141 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:04.142 node0=512 expecting 512 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:04.142 node1=512 expecting 512 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:04.142 00:05:04.142 real 0m3.098s 00:05:04.142 user 0m1.044s 00:05:04.142 sys 0m1.835s 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.142 00:41:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:04.142 ************************************ 00:05:04.142 END TEST even_2G_alloc 00:05:04.142 ************************************ 00:05:04.142 00:41:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:04.142 00:41:51 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.142 00:41:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.142 00:41:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:04.142 ************************************ 00:05:04.142 START TEST odd_alloc 00:05:04.142 ************************************ 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # 
nr_hugepages=1025 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.142 00:41:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:05:07.436 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:07.436 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:05:07.436 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:07.436 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:07.436 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:07.436 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:07.436 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:07.436 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:07.436 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:07.436 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:07.436 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:07.436 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:07.436 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:07.436 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:07.436 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:07.436 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:07.436 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 
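[editor's note] In the odd_alloc setup above, the requested 2098176 kB become nr_hugepages=1025 two-megabyte pages, and get_test_nr_hugepages_per_node hands them out node by node: node1 is assigned 512 first and node0 picks up the remaining 513, so the odd total still adds up across both nodes (the ': 513' / ': 1' entries are the shrinking remainder and node count). A small sketch of that distribution, assuming each node gets the integer share of whatever is still unassigned while walking from the highest node index down, which reproduces the 512/513 split and the remainders shown in the trace; split_pages is an illustrative name, not the hugepages.sh function:

    split_pages() {
        local total=$1 nodes=$2
        local -a per_node=()
        local n i
        for (( n = nodes; n > 0; n-- )); do
            per_node[n-1]=$(( total / n ))      # e.g. 1025 / 2 = 512 for node1
            total=$(( total - per_node[n-1] ))  # 513 left over for node0
        done
        for i in "${!per_node[@]}"; do
            echo "node${i}=${per_node[i]}"
        done
    }
    split_pages 1025 2   # -> node0=513, node1=512

split_pages 1025 2 prints node0=513 and node1=512, the per-node split the odd_alloc test goes on to verify. [end note]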
00:05:07.436 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111425420 kB' 'MemAvailable: 114713740 kB' 'Buffers: 2696 kB' 'Cached: 9389244 kB' 'SwapCached: 0 kB' 'Active: 6459120 kB' 'Inactive: 3418944 kB' 'Active(anon): 5893468 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495316 kB' 'Mapped: 193812 kB' 'Shmem: 5407344 kB' 'KReclaimable: 256708 kB' 'Slab: 833000 kB' 'SReclaimable: 256708 kB' 'SUnreclaim: 576292 kB' 'KernelStack: 24256 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557988 kB' 'Committed_AS: 7347568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228192 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:07.436 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.436 
00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 [... identical IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue trace entries repeat here for every /proc/meminfo key from MemFree through VmallocTotal ...] 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.438
00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111426088 kB' 'MemAvailable: 114714408 kB' 'Buffers: 2696 kB' 'Cached: 9389248 kB' 'SwapCached: 0 kB' 'Active: 6458756 kB' 'Inactive: 3418944 kB' 'Active(anon): 5893104 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 495004 kB' 'Mapped: 193800 kB' 'Shmem: 5407348 kB' 'KReclaimable: 256708 kB' 'Slab: 833000 kB' 'SReclaimable: 256708 kB' 'SUnreclaim: 576292 kB' 'KernelStack: 24256 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557988 kB' 'Committed_AS: 7347588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228160 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.438 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.438 00:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.438 [... identical IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace entries repeat here for every /proc/meminfo key from Inactive through HugePages_Rsvd ...] 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.439 00:41:54 setup.sh.hugepages.odd_alloc --
setup/common.sh@33 -- # echo 0 00:05:07.439 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.439 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:07.439 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.439 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111425300 kB' 'MemAvailable: 114713620 kB' 'Buffers: 2696 kB' 'Cached: 9389264 kB' 'SwapCached: 0 kB' 'Active: 6458748 kB' 'Inactive: 3418944 kB' 'Active(anon): 5893096 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494980 kB' 'Mapped: 193800 kB' 'Shmem: 5407364 kB' 'KReclaimable: 256708 kB' 'Slab: 833084 kB' 'SReclaimable: 256708 kB' 'SUnreclaim: 576376 kB' 'KernelStack: 24288 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557988 kB' 'Committed_AS: 7347608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228176 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.440 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
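The passes above are setup/common.sh's get_meminfo helper at work: it picks /proc/meminfo (or a node's meminfo file), strips any "Node <n> " prefix, then reads each "key: value" pair with IFS=': ', skipping every key that is not the requested field and echoing the value once the key matches (AnonHugePages, then HugePages_Surp, now HugePages_Rsvd). A minimal standalone sketch of that pattern, with an illustrative function name and simplified node handling rather than the exact setup/common.sh code:

    #!/usr/bin/env bash
    # Minimal sketch (assumed names, not SPDK's API): fetch one field from
    # /proc/meminfo, or from a NUMA node's meminfo file when a node is given,
    # the same way the trace above scans key by key until the field matches.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            # per-node files prefix every line with "Node <n> "; strip it
            [[ -n $node ]] && line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"        # sized fields are reported in kB
                return 0
            fi
        done < "$mem_f"
        return 1                   # requested field not present
    }
    # e.g.: get_meminfo_sketch HugePages_Surp   -> 0 on this box

The [[ -e /sys/devices/system/node/node/meminfo ]] test visible in the trace is the same node selection with node left empty, which is why every pass here falls back to the system-wide /proc/meminfo.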
00:05:07.440 [... identical IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue trace entries repeat here for every /proc/meminfo key from MemAvailable through CmaTotal ...] 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 --
# IFS=': ' 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:07.441 nr_hugepages=1025 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.441 resv_hugepages=0 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.441 surplus_hugepages=0 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.441 anon_hugepages=0 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.441 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111424544 kB' 'MemAvailable: 114712864 kB' 'Buffers: 2696 kB' 'Cached: 9389284 kB' 'SwapCached: 0 kB' 'Active: 6458740 kB' 'Inactive: 3418944 kB' 'Active(anon): 5893088 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494948 kB' 'Mapped: 193800 kB' 'Shmem: 5407384 kB' 'KReclaimable: 256708 kB' 'Slab: 833084 kB' 'SReclaimable: 256708 kB' 'SUnreclaim: 576376 kB' 'KernelStack: 24272 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557988 kB' 'Committed_AS: 7347628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228176 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.442 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 60943660 kB' 'MemUsed: 4812320 kB' 'SwapCached: 0 kB' 'Active: 1792472 kB' 'Inactive: 71096 kB' 'Active(anon): 1705140 kB' 'Inactive(anon): 0 kB' 'Active(file): 87332 kB' 'Inactive(file): 71096 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1522052 kB' 'Mapped: 67352 kB' 'AnonPages: 350652 kB' 'Shmem: 1363624 kB' 'KernelStack: 14200 kB' 'PageTables: 5108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127200 kB' 'Slab: 448804 kB' 'SReclaimable: 127200 kB' 'SUnreclaim: 321604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:07.443 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
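The records above are setup/common.sh's get_meminfo stepping through /sys/devices/system/node/node0/meminfo one "key: value" field at a time until it reaches the requested HugePages_Surp entry. A condensed sketch of that scan, kept close to the traced commands, follows; the standalone function name, the shopt line, and the usage comment are illustrative additions (assumptions), not quotes from the script.

# Sketch of the meminfo scan exercised by the trace above (assumes bash >= 4 for mapfile).
shopt -s extglob                       # needed for the +([0-9]) pattern below
get_meminfo_sketch() {
    local get=$1 node=${2-}            # e.g. get=HugePages_Surp, node=0
    local var val _ mem
    local mem_f=/proc/meminfo
    # Per-node counters come from sysfs when a node index is given and present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 0 " prefix used by per-node files
    # Split each line on ': ' and print the value of the requested key.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# get_meminfo_sketch HugePages_Surp 0   -> prints 0, matching the node0 records above.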
00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.444 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681992 kB' 'MemFree: 50481016 kB' 'MemUsed: 10200976 kB' 'SwapCached: 0 kB' 'Active: 4666332 kB' 'Inactive: 3347848 kB' 'Active(anon): 4188012 kB' 'Inactive(anon): 0 kB' 'Active(file): 478320 kB' 'Inactive(file): 3347848 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7869972 kB' 'Mapped: 126448 kB' 'AnonPages: 144320 kB' 'Shmem: 4043804 kB' 'KernelStack: 10088 kB' 'PageTables: 2744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129508 kB' 'Slab: 384280 kB' 'SReclaimable: 129508 kB' 'SUnreclaim: 254772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.445 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:07.446 node0=512 expecting 513 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.446 00:41:54 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:07.446 node1=513 expecting 512 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:07.446 00:05:07.446 real 0m3.142s 00:05:07.446 user 0m0.998s 00:05:07.446 sys 0m1.930s 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.446 00:41:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:07.446 ************************************ 00:05:07.446 END TEST odd_alloc 00:05:07.446 ************************************ 00:05:07.446 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:07.446 00:41:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.446 00:41:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.446 00:41:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.446 ************************************ 00:05:07.446 START TEST custom_alloc 00:05:07.446 ************************************ 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=256 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:07.446 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:07.447 00:41:54 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.447 00:41:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:05:10.742 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:10.742 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:05:10.742 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:10.742 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:10.742 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:10.742 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:10.742 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:10.742 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:10.742 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:10.742 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:10.742 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:10.742 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:10.742 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:10.742 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:10.742 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:10.742 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:10.742 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:10.742 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:10.742 
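[editor's note] Before the long verify_nr_hugepages read-out that follows, a short sketch of how the HUGENODE string echoed above (nodes_hp[0]=512,nodes_hp[1]=1024) is assembled from the nodes_hp array and how the total of 1536 pages comes about. The join helper is an illustrative stand-in; how scripts/setup.sh consumes HUGENODE is not shown in this excerpt and is assumed rather than verified.

#!/usr/bin/env bash
# Reconstruction of the HUGENODE composition visible in the trace
# (hugepages.sh@181-@187). Values are the ones from this run.
declare -a nodes_hp=([0]=512 [1]=1024)

declare -a HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    _nr_hugepages=$((_nr_hugepages + nodes_hp[node]))
done

# join with commas, as the IFS=, set at the top of custom_alloc implies
join() { local IFS=,; echo "$*"; }

echo "HUGENODE=$(join "${HUGENODE[@]}")"   # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"         # nr_hugepages=1536

# HUGENODE="$(join "${HUGENODE[@]}")" ./scripts/setup.sh   # roughly how the trace drives setup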
00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110367704 kB' 'MemAvailable: 113656024 kB' 'Buffers: 2696 kB' 'Cached: 9389416 kB' 'SwapCached: 0 kB' 'Active: 6460748 kB' 'Inactive: 3418944 kB' 'Active(anon): 5895096 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496860 kB' 'Mapped: 193908 kB' 'Shmem: 5407516 kB' 'KReclaimable: 256708 kB' 'Slab: 833456 kB' 'SReclaimable: 256708 kB' 'SUnreclaim: 576748 kB' 'KernelStack: 24320 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034724 kB' 'Committed_AS: 7348396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228288 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.742 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 
00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.743 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110370132 kB' 'MemAvailable: 113658452 kB' 'Buffers: 2696 kB' 
'Cached: 9389424 kB' 'SwapCached: 0 kB' 'Active: 6461296 kB' 'Inactive: 3418944 kB' 'Active(anon): 5895644 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497496 kB' 'Mapped: 193908 kB' 'Shmem: 5407524 kB' 'KReclaimable: 256708 kB' 'Slab: 833448 kB' 'SReclaimable: 256708 kB' 'SUnreclaim: 576740 kB' 'KernelStack: 24288 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034724 kB' 'Committed_AS: 7348416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228256 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.744 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
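[editor's note] The block above (and the similar ones before and after it) is the xtrace of a single helper that walks /proc/meminfo one key at a time. Below is a compact sketch of the same lookup, mirroring the mapfile, "Node <N>" prefix strip, and IFS=': ' read steps visible in the setup/common.sh trace; it is a simplified reconstruction, not setup/common.sh itself, and the per-node path is only exercised when a node argument is passed.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

# Simplified reconstruction of the lookup traced above (common.sh@17-@33):
# pick the system-wide or per-node meminfo file, strip the "Node <N> "
# prefix that per-node files carry, and print the value of the requested key.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop "Node 0 ", "Node 1 ", ...

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total    # 1536 on the box traced above
get_meminfo AnonHugePages      # 0, which is why anon=0 earlier in the trace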
00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110370232 kB' 'MemAvailable: 113658552 kB' 'Buffers: 2696 kB' 'Cached: 9389436 kB' 'SwapCached: 0 kB' 'Active: 6460796 kB' 'Inactive: 3418944 kB' 'Active(anon): 5895144 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496912 kB' 'Mapped: 193852 kB' 'Shmem: 5407536 kB' 'KReclaimable: 256708 kB' 'Slab: 833424 kB' 'SReclaimable: 256708 kB' 'SUnreclaim: 576716 kB' 'KernelStack: 24272 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034724 kB' 'Committed_AS: 7348436 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228272 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 
'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.745 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 
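[editor's note] For reference, the hugepage counters this verification pass keeps re-reading can be spot-checked directly; the expected output below is copied from the meminfo dumps printed above (1536 pages at 2048 kB each), while the exact bookkeeping verify_nr_hugepages does with anon, surp, and resv happens later in the log.

# quick spot-check of the counters being looked up above
grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo
# HugePages_Total:    1536
# HugePages_Free:     1536
# HugePages_Rsvd:        0
# HugePages_Surp:        0
# Hugepagesize:       2048 kB
# Hugetlb:         3145728 kB
# 1536 pages x 2048 kB = 3145728 kB, matching the Hugetlb line in the dumps above.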
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.746 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:10.747 nr_hugepages=1536 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- 
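[annotation] The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" entries above is bash xtrace output from the get_meminfo helper in setup/common.sh: it walks every key in /proc/meminfo until it reaches HugePages_Rsvd, which is 0 on this machine, so hugepages.sh records resv=0 alongside nr_hugepages=1536. Below is a condensed sketch of that loop, reconstructed from the setup/common.sh@17-33 references in the trace; it is not verbatim SPDK source and details may differ.

  #!/usr/bin/env bash
  # Sketch of the meminfo scan traced above (reconstructed from setup/common.sh@17-33; not verbatim SPDK source).
  shopt -s extglob
  get_meminfo() {
          local get=$1 node=$2            # e.g. get=HugePages_Rsvd; node empty => system-wide
          local var val _
          local mem_f=/proc/meminfo mem
          # A node argument switches the source to that node's meminfo file (common.sh@23-24).
          [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
                  mem_f=/sys/devices/system/node/node$node/meminfo
          mapfile -t mem < "$mem_f"
          mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "; drop it (common.sh@29)
          local line
          for line in "${mem[@]}"; do
                  IFS=': ' read -r var val _ <<< "$line"
                  [[ $var == "$get" ]] || continue   # each mismatch is one "continue" entry in the log
                  echo "$val"                        # common.sh@33
                  return 0
          done
  }
  get_meminfo HugePages_Rsvd      # prints 0 here, hence resv=0 in hugepages.sh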
setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.747 resv_hugepages=0 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.747 surplus_hugepages=0 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.747 anon_hugepages=0 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 110372524 kB' 'MemAvailable: 113660844 kB' 'Buffers: 2696 kB' 'Cached: 9389460 kB' 'SwapCached: 0 kB' 'Active: 6461036 kB' 'Inactive: 3418944 kB' 'Active(anon): 5895384 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497172 kB' 'Mapped: 193852 kB' 'Shmem: 5407560 kB' 'KReclaimable: 256708 kB' 'Slab: 833424 kB' 'SReclaimable: 256708 kB' 'SUnreclaim: 576716 kB' 'KernelStack: 24304 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034724 kB' 'Committed_AS: 7351160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228272 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.747 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 
00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.748 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:10.749 
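[annotation] At this point the second scan has returned HugePages_Total = 1536, so the check at hugepages.sh@110 holds: 1536 == nr_hugepages + surp + resv = 1536 + 0 + 0. get_nodes (@27-@33) then finds two NUMA nodes and records the split this custom_alloc test requested, 512 pages on node0 and 1024 on node1, and @115-@117 walk the nodes adding each one's reserved and surplus pages. A small sketch of that bookkeeping follows, with array names and values taken from the trace and the surrounding glue assumed; it is not verbatim SPDK source.

  # Per-node bookkeeping as seen at hugepages.sh@29-@33 and @115-@117 (glue code assumed).
  nodes_sys=([0]=512 [1]=1024)    # @30: hugepages currently reserved per NUMA node; 512 + 1024 = 1536
  nodes_test=([0]=512 [1]=1024)   # expected split for this run (assumed to mirror nodes_sys)
  no_nodes=2                      # @32
  resv=0                          # from the HugePages_Rsvd scan above
  for node in "${!nodes_test[@]}"; do                                   # @115
          (( nodes_test[node] += resv ))                                # @116
          # @117: per-node surplus, read from /sys/devices/system/node/nodeN/meminfo (0 on both nodes here)
          surp=$(awk -v tag="HugePages_Surp:" '$3 == tag {print $4}' \
                  "/sys/devices/system/node/node${node}/meminfo")
          (( nodes_test[node] += surp ))
  done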
00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 60943176 kB' 'MemUsed: 4812804 kB' 'SwapCached: 0 kB' 'Active: 1792296 kB' 'Inactive: 71096 kB' 'Active(anon): 1704964 kB' 'Inactive(anon): 0 kB' 'Active(file): 87332 kB' 'Inactive(file): 71096 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1522084 kB' 'Mapped: 67380 kB' 'AnonPages: 350476 kB' 'Shmem: 1363656 kB' 'KernelStack: 14232 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127200 kB' 'Slab: 448792 kB' 'SReclaimable: 127200 kB' 'SUnreclaim: 321592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.749 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.750 
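[annotation] For the node 1 query that follows, setup/common.sh@23-@24 switch mem_f from /proc/meminfo to /sys/devices/system/node/node1/meminfo, and @29 strips the leading "Node 1 " from every line so the same IFS=': ' read -r var val _ parser handles both formats. An illustrative way to pull the same counters by hand, not part of the SPDK scripts:

  # Hugepage counters for NUMA node 1, with the "Node 1 " prefix removed as common.sh@29 does.
  sed -n 's/^Node 1 \(HugePages_[A-Za-z]*:.*\)/\1/p' /sys/devices/system/node/node1/meminfo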
00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681992 kB' 'MemFree: 49421960 kB' 'MemUsed: 11260032 kB' 'SwapCached: 0 kB' 'Active: 4674424 kB' 'Inactive: 3347848 kB' 'Active(anon): 4196104 kB' 'Inactive(anon): 0 kB' 'Active(file): 478320 kB' 'Inactive(file): 3347848 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7870108 kB' 'Mapped: 127140 kB' 'AnonPages: 152404 kB' 'Shmem: 4043940 kB' 'KernelStack: 10072 kB' 'PageTables: 2648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129508 kB' 'Slab: 384632 kB' 'SReclaimable: 129508 kB' 'SUnreclaim: 255124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.750 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 
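[annotation] The node 1 printf above reports HugePages_Total: 1024 and HugePages_Free: 1024 alongside node 0's 512/512, i.e. exactly the 512 + 1024 = 1536 split already matched against nr_hugepages at hugepages.sh@110, and both nodes show HugePages_Surp: 0, so the custom allocation is fully backed with no surplus pages. An illustrative cross-check, not part of the SPDK scripts, that the per-node totals add up to the global count:

  # Per-node HugePages_Total summed across nodes should equal the global /proc/meminfo value.
  global=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  per_node=$(awk '$3 == "HugePages_Total:" {sum += $4} END {print sum}' \
          /sys/devices/system/node/node*/meminfo)
  (( per_node == global )) && echo "OK: ${per_node} per-node pages == ${global} global"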
00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:10.751 node0=512 expecting 512 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 
1024'
00:05:10.751 node1=1024 expecting 1024
00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:10.751
00:05:10.751 real 0m3.263s
00:05:10.751 user 0m1.049s
00:05:10.751 sys 0m2.009s
00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:10.751 00:41:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:10.751 ************************************
00:05:10.751 END TEST custom_alloc
00:05:10.751 ************************************
00:05:10.752 00:41:57 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:10.752 00:41:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:10.752 00:41:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:10.752 00:41:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:10.752 ************************************
00:05:10.752 START TEST no_shrink_alloc
00:05:10.752 ************************************
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:10.752 00:41:57 setup.sh.hugepages.no_shrink_alloc --
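The get_test_nr_hugepages trace above boils down to a little arithmetic: the requested amount (2097152 kB, i.e. 2 GiB) divided by the system hugepage size (default_hugepages, 2048 kB per the Hugepagesize field in the meminfo dumps below) gives nr_hugepages=1024, and because an explicit node list ("0") was passed, the whole count is pinned to node 0 rather than being split across both NUMA nodes the way custom_alloc spread 512/1024 above. A minimal sketch of that logic, reconstructed from the xtrace; the even-split fallback and the hard-coded default_hugepages are assumptions, since only the explicit-node path is exercised here.

#!/usr/bin/env bash
# Sketch of the sizing traced at setup/hugepages.sh@49-73 above.
# default_hugepages would normally come from Hugepagesize in /proc/meminfo;
# fixing it at 2048 kB here is an assumption for illustration.
default_hugepages=2048
nr_hugepages=0
declare -a nodes_test=()

get_test_nr_hugepages() {
    local size=$1            # requested amount in kB (2097152 kB = 2 GiB here)
    shift
    local node_ids=("$@")    # optional explicit NUMA node list ("0" here)

    ((size >= default_hugepages)) || return 1
    nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024 pages

    get_test_nr_hugepages_per_node "${node_ids[@]}"
}

get_test_nr_hugepages_per_node() {
    local user_nodes=("$@")
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=2        # NUMA nodes on this test machine
    local node

    nodes_test=()
    if ((${#user_nodes[@]} > 0)); then
        # Explicit node list: every listed node gets the full count.
        for node in "${user_nodes[@]}"; do
            nodes_test[node]=$_nr_hugepages
        done
        return 0
    fi
    # Assumed fallback, not exercised in this log: spread evenly.
    for ((node = 0; node < _no_nodes; node++)); do
        nodes_test[node]=$((_nr_hugepages / _no_nodes))
    done
}

get_test_nr_hugepages 2097152 0 && declare -p nodes_test   # -> [0]="1024"

(The trace reuses _no_nodes as the loop variable at hugepages.sh@70; the sketch uses a separate node variable for readability.) The "setup output" step the trace is entering next re-runs scripts/setup.sh, which prints the device bindings listed below before verify_nr_hugepages starts.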
setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:05:14.044 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:14.044 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:05:14.044 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:14.044 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:14.044 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:14.044 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:14.044 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:14.044 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:14.044 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:14.044 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:14.044 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:14.044 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:14.044 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:14.044 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:14.044 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:14.044 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:14.044 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:14.044 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc 
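From here to the end of this excerpt the log is the same pattern repeated: verify_nr_hugepages asks the get_meminfo helper in setup/common.sh for one field at a time, the helper dumps the whole of /proc/meminfo (the printf that follows) and walks it line by line, and every field it skips leaves a "[[ ... ]]" / "continue" pair in the xtrace. A rough reconstruction of that helper, pieced together from the common.sh@16-33 trace; the loop plumbing is an assumption, the observable behaviour is what the log shows.

# Reconstruction of the helper traced at setup/common.sh@16-33.
# Usage: get_meminfo <field> [node], e.g. get_meminfo HugePages_Surp 1
shopt -s extglob   # needed for the "Node <id> " strip pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node queries read the node's own file when it exists; its lines
    # carry a "Node <id> " prefix that gets stripped below.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one; these skips are the
        # long runs of "continue" filling the trace above and below.
        [[ $var == "$get" ]] || continue
        echo "$val"   # e.g. 0 for HugePages_Surp, 49421960 (kB) for MemFree
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

In this verify pass the node argument is empty, so everything comes from /proc/meminfo; the custom_alloc block earlier in the excerpt is the same helper pointed at /sys/devices/system/node/node1/meminfo.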
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111353152 kB' 'MemAvailable: 114641440 kB' 'Buffers: 2696 kB' 'Cached: 9389584 kB' 'SwapCached: 0 kB' 'Active: 6469188 kB' 'Inactive: 3418944 kB' 'Active(anon): 5903536 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504988 kB' 'Mapped: 194796 kB' 'Shmem: 5407684 kB' 'KReclaimable: 256644 kB' 'Slab: 833340 kB' 'SReclaimable: 256644 kB' 'SUnreclaim: 576696 kB' 'KernelStack: 24464 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7361896 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228356 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.044 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 
00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.045 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111347340 kB' 'MemAvailable: 114635628 kB' 'Buffers: 2696 kB' 'Cached: 9389584 kB' 'SwapCached: 0 kB' 'Active: 6467308 kB' 'Inactive: 3418944 kB' 'Active(anon): 5901656 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503192 kB' 'Mapped: 194756 kB' 'Shmem: 5407684 kB' 'KReclaimable: 256644 kB' 'Slab: 833344 kB' 'SReclaimable: 256644 kB' 'SUnreclaim: 576700 kB' 'KernelStack: 24512 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7360716 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228336 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- 
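Each call to the helper rescans its file from the top, which is why the AnonHugePages lookup (anon=0 at hugepages.sh@97 above) and the HugePages_Surp lookup now underway each dump the full meminfo contents again, and why a third pass for HugePages_Rsvd follows further down. Purely for comparison, and not something the SPDK scripts themselves do, the three fields could be collected in one pass from the shell:

# Illustrative single-pass equivalent of the three get_meminfo calls in this
# verify step; it prints AnonHugePages (kB), HugePages_Surp and HugePages_Rsvd
# exactly as /proc/meminfo reports them (0 0 0 for the system traced here).
awk '$1 == "AnonHugePages:"  { anon = $2 }
     $1 == "HugePages_Surp:" { surp = $2 }
     $1 == "HugePages_Rsvd:" { rsvd = $2 }
     END { print anon + 0, surp + 0, rsvd + 0 }' /proc/meminfo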
setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 
00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
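The values these passes return feed the bookkeeping at setup/hugepages.sh@97-100: anon and surp are already in hand, and a reserved-page (HugePages_Rsvd) lookup follows immediately below. The assertions themselves fall outside this excerpt, so the following is only a hypothetical sketch of the kind of check those numbers support, reusing the get_meminfo sketch above; the function name and the exact conditions are assumptions, only the values in the comments come from the meminfo dumps in this log.

# Hypothetical summary check (not taken from setup/hugepages.sh).
verify_hugepage_totals() {
    local expected=$1                        # 1024 pages for no_shrink_alloc
    local total free surp resv

    total=$(get_meminfo HugePages_Total)     # 1024 in this run
    free=$(get_meminfo HugePages_Free)       # 1024
    surp=$(get_meminfo HugePages_Surp)       # 0
    resv=$(get_meminfo HugePages_Rsvd)       # 0

    # Everything the test asked for was allocated, none of it as surplus
    # pages the kernel could take back under memory pressure.
    ((total - surp == expected)) || return 1
    # Nothing has faulted pages in or reserved them yet at this point.
    ((free == total && resv == 0)) || return 1
    return 0
}

verify_hugepage_totals 1024 && echo "hugepage pool matches the request"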
00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111342088 kB' 'MemAvailable: 114630376 kB' 'Buffers: 2696 kB' 'Cached: 9389604 kB' 'SwapCached: 0 kB' 'Active: 6472308 kB' 'Inactive: 3418944 kB' 'Active(anon): 5906656 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508496 kB' 'Mapped: 194572 kB' 'Shmem: 5407704 kB' 'KReclaimable: 256644 kB' 'Slab: 833368 kB' 'SReclaimable: 256644 kB' 'SUnreclaim: 576724 kB' 'KernelStack: 24544 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7366296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228372 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 
00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.049 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.050 nr_hugepages=1024 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.050 resv_hugepages=0 
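[editorial note] The trace above shows setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time (IFS=': ' read -r var val _, continue until the requested key matches, then echo the value). A minimal sketch of that same parsing pattern, for readers following the trace, is below; the function name get_meminfo_sketch is illustrative and is not part of setup/common.sh.

get_meminfo_sketch() {
    # Illustrative only -- mirrors the match-or-continue loop visible in the xtrace.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"      # e.g. "0" for HugePages_Rsvd on the system traced above
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# Example: get_meminfo_sketch HugePages_Rsvd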
00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.050 surplus_hugepages=0 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.050 anon_hugepages=0 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111338140 kB' 'MemAvailable: 114626428 kB' 'Buffers: 2696 kB' 'Cached: 9389604 kB' 'SwapCached: 0 kB' 'Active: 6468112 kB' 'Inactive: 3418944 kB' 'Active(anon): 5902460 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504392 kB' 'Mapped: 194700 kB' 'Shmem: 5407704 kB' 'KReclaimable: 256644 kB' 'Slab: 833368 kB' 'SReclaimable: 256644 kB' 'SUnreclaim: 576724 kB' 'KernelStack: 24544 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7359788 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228388 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 
0 )) 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 59881100 kB' 'MemUsed: 5874880 kB' 'SwapCached: 0 kB' 'Active: 1794612 kB' 'Inactive: 71096 kB' 'Active(anon): 1707280 kB' 'Inactive(anon): 0 kB' 'Active(file): 87332 kB' 'Inactive(file): 71096 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1522184 kB' 'Mapped: 67376 kB' 'AnonPages: 352704 kB' 'Shmem: 1363756 kB' 'KernelStack: 14296 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127168 kB' 'Slab: 448596 kB' 'SReclaimable: 127168 kB' 'SUnreclaim: 321428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
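[editorial note] At hugepages.sh@29-30 and common.sh@23-24 above, the script switches from /proc/meminfo to the per-node files under /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that common.sh@29 strips before parsing. A short sketch of reading the same per-node hugepage counts, assuming the standard sysfs layout seen in the trace:

# Illustrative only -- print each node's HugePages_Total from its sysfs meminfo.
for node in /sys/devices/system/node/node[0-9]*; do
    awk -v n="${node##*/}" '/HugePages_Total/ {print n": "$NF}' "$node/meminfo"
done
# Consistent with nodes_sys[0]=1024 and nodes_sys[1]=0 recorded in the trace above.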
00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:14.053 node0=1024 expecting 1024 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.053 00:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:05:17.346 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:17.346 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:05:17.346 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:17.346 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:17.346 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:17.346 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:17.346 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:17.346 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:17.346 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:17.346 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:17.346 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:17.346 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:17.346 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:17.346 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:17.346 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:17.346 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:05:17.346 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:05:17.346 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:05:17.346 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:17.346 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111287224 kB' 'MemAvailable: 114575512 kB' 'Buffers: 2696 kB' 'Cached: 9389724 kB' 'SwapCached: 0 kB' 'Active: 6469652 kB' 'Inactive: 3418944 kB' 'Active(anon): 5904000 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504924 kB' 'Mapped: 194876 kB' 'Shmem: 5407824 kB' 'KReclaimable: 256644 kB' 'Slab: 833508 kB' 'SReclaimable: 256644 kB' 'SUnreclaim: 576864 kB' 'KernelStack: 24512 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7359048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228420 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.346 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.347 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111287424 kB' 'MemAvailable: 114575712 kB' 'Buffers: 2696 kB' 'Cached: 9389728 kB' 'SwapCached: 0 kB' 'Active: 6469192 kB' 'Inactive: 3418944 kB' 'Active(anon): 5903540 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504944 kB' 'Mapped: 194720 kB' 'Shmem: 5407828 kB' 'KReclaimable: 256644 kB' 'Slab: 833504 kB' 'SReclaimable: 256644 kB' 'SUnreclaim: 576860 kB' 'KernelStack: 24528 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7359064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228404 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.348 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.349 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111287608 kB' 'MemAvailable: 114575896 kB' 'Buffers: 2696 kB' 'Cached: 9389744 kB' 'SwapCached: 0 kB' 'Active: 6469168 kB' 'Inactive: 3418944 kB' 
'Active(anon): 5903516 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504904 kB' 'Mapped: 194720 kB' 'Shmem: 5407844 kB' 'KReclaimable: 256644 kB' 'Slab: 833496 kB' 'SReclaimable: 256644 kB' 'SUnreclaim: 576852 kB' 'KernelStack: 24544 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7359088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228436 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.350 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.351 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
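The long run of [[ <field> == HugePages_Rsvd ]] / continue entries here is setup/common.sh's get_meminfo walking the meminfo snapshot printed just above, one field at a time, until the requested key matches, then echoing its value. A minimal standalone reconstruction of that lookup, pieced together from the trace (the shopt line and the two example calls at the bottom are additions for illustration, not part of the original script):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N " prefixes

  # get_meminfo FIELD [NODE] -- print the value of one meminfo field.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo
      # Per-node statistics live in sysfs; fall back to the global file otherwise.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix, if any
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue      # skip fields until the key matches
          echo "$val"                           # value only, any "kB" unit is dropped
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Rsvd       # prints 0 on the system traced here
  get_meminfo HugePages_Surp 0     # same lookup against node0's meminfo

Because the per-node files prefix every line with "Node N", stripping that prefix first lets the same read loop serve both the global and the node-local lookups, which is exactly the pattern repeated throughout this trace.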
00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:17.352 nr_hugepages=1024 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.352 resv_hugepages=0 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.352 surplus_hugepages=0 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.352 anon_hugepages=0 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437972 kB' 'MemFree: 111287608 kB' 'MemAvailable: 114575896 kB' 'Buffers: 2696 
kB' 'Cached: 9389760 kB' 'SwapCached: 0 kB' 'Active: 6468828 kB' 'Inactive: 3418944 kB' 'Active(anon): 5903176 kB' 'Inactive(anon): 0 kB' 'Active(file): 565652 kB' 'Inactive(file): 3418944 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504500 kB' 'Mapped: 194720 kB' 'Shmem: 5407860 kB' 'KReclaimable: 256644 kB' 'Slab: 833496 kB' 'SReclaimable: 256644 kB' 'SUnreclaim: 576852 kB' 'KernelStack: 24528 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559012 kB' 'Committed_AS: 7359108 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228436 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2326592 kB' 'DirectMap2M: 17373184 kB' 'DirectMap1G: 116391936 kB' 00:05:17.352 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.352 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.352 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.352 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.352 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.353 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
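The values echoed a little earlier (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency checks at hugepages.sh@107-110: the HugePages_Total the kernel reports has to equal the configured page count plus surplus plus reserved pages. A hedged standalone version of that accounting check, using awk instead of the script's own get_meminfo; the requested value of 1024 is taken from this run:

  #!/usr/bin/env bash
  # requested: the page count the test configured; 1024 is the value from this run.
  requested=1024

  meminfo_val() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }

  total=$(meminfo_val HugePages_Total)
  surp=$(meminfo_val HugePages_Surp)
  resv=$(meminfo_val HugePages_Rsvd)

  # Every page the kernel reports must be accounted for by what was asked for,
  # plus any surplus and reserved pages.
  if (( total == requested + surp + resv )); then
      echo "hugepage pool consistent: total=$total surplus=$surp reserved=$resv"
  else
      echo "hugepage accounting mismatch: total=$total surplus=$surp reserved=$resv" >&2
      exit 1
  fi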
00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 59871212 kB' 'MemUsed: 5884768 kB' 'SwapCached: 0 kB' 'Active: 1793380 kB' 'Inactive: 71096 kB' 'Active(anon): 1706048 kB' 'Inactive(anon): 0 kB' 'Active(file): 87332 kB' 'Inactive(file): 71096 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1522288 kB' 
'Mapped: 67544 kB' 'AnonPages: 351288 kB' 'Shmem: 1363860 kB' 'KernelStack: 14344 kB' 'PageTables: 5168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127168 kB' 'Slab: 448560 kB' 'SReclaimable: 127168 kB' 'SUnreclaim: 321392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.354 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
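From here the same field-by-field walk is repeated against /sys/devices/system/node/node0/meminfo instead of the global file, so get_nodes can verify how the 1024-page pool is spread across the two NUMA nodes (all of it on node0 in this run, hence the later "node0=1024 expecting 1024"). A rough sketch of such a per-node survey; the hugepages-2048kB path matches the 2 MiB Hugepagesize reported above but is otherwise an assumption:

  #!/usr/bin/env bash
  # Survey how the hugepage pool is spread across NUMA nodes.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      # Per-size pool counter; 2048kB matches the Hugepagesize reported above.
      nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages" 2>/dev/null)
      # Node meminfo lines are prefixed "Node N", so the key sits in field 3.
      surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
      echo "node$node: nr_hugepages=${nr:-0} surplus=${surp:-0}"
  done

On the machine traced here this would print node0 with 1024 pages and node1 with 0, matching the assertion that closes the no_shrink_alloc test.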
00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.355 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:17.356 node0=1024 expecting 1024 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:17.356 00:05:17.356 real 0m6.371s 00:05:17.356 user 0m2.037s 00:05:17.356 sys 0m3.922s 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.356 00:42:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:17.356 ************************************ 00:05:17.356 END TEST no_shrink_alloc 00:05:17.356 ************************************ 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:17.356 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:17.356 00:05:17.356 real 0m23.448s 00:05:17.356 user 0m7.276s 00:05:17.356 sys 0m13.619s 00:05:17.356 00:42:04 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.356 00:42:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:17.356 ************************************ 00:05:17.356 END TEST hugepages 00:05:17.356 ************************************ 00:05:17.356 00:42:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:05:17.356 00:42:04 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.356 00:42:04 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.356 00:42:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.356 ************************************ 00:05:17.356 START TEST driver 00:05:17.356 ************************************ 00:05:17.356 00:42:04 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:05:17.356 * Looking for test storage... 
00:05:17.356 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:05:17.356 00:42:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:17.356 00:42:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.356 00:42:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.630 00:42:08 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:22.630 00:42:08 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.630 00:42:08 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.630 00:42:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:22.630 ************************************ 00:05:22.630 START TEST guess_driver 00:05:22.630 ************************************ 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 335 > 0 )) 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:22.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:22.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:22.630 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:22.630 Looking for driver=vfio-pci 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:22.630 00:42:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.631 00:42:08 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:12 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.170 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.431 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.431 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.431 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.431 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.431 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.431 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.431 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.431 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.431 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.691 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.691 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.691 00:42:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:26.261 00:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:26.261 00:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:26.261 00:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:26.261 00:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:26.261 00:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:26.261 00:42:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:26.261 00:42:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:05:31.575 00:05:31.575 real 0m9.015s 00:05:31.575 user 0m2.215s 00:05:31.575 sys 0m4.346s 00:05:31.575 00:42:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.575 00:42:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:31.575 
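The guess_driver trace shows how setup/driver.sh settles on vfio-pci: it checks /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, counts the entries under /sys/kernel/iommu_groups (335 here), and runs modprobe --show-depends vfio_pci to confirm that the module and its dependencies resolve to loadable .ko files. A rough equivalent under the same sysfs assumptions; pick_driver and the uio_pci_generic fallback are illustrative, and the real test can instead report "No valid driver found":

  #!/usr/bin/env bash
  shopt -s nullglob   # make an empty iommu_groups directory count as zero entries

  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      # Require working IOMMU groups and a resolvable vfio_pci module chain.
      if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo uio_pci_generic   # illustrative fallback only
      fi
  }

  echo "Looking for driver=$(pick_driver)"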
************************************ 00:05:31.575 END TEST guess_driver 00:05:31.575 ************************************ 00:05:31.575 00:05:31.575 real 0m13.633s 00:05:31.575 user 0m3.366s 00:05:31.575 sys 0m6.645s 00:05:31.575 00:42:17 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.575 00:42:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:31.575 ************************************ 00:05:31.575 END TEST driver 00:05:31.575 ************************************ 00:05:31.575 00:42:17 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:05:31.575 00:42:17 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.575 00:42:17 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.575 00:42:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:31.575 ************************************ 00:05:31.575 START TEST devices 00:05:31.575 ************************************ 00:05:31.575 00:42:17 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:05:31.575 * Looking for test storage... 00:05:31.575 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:05:31.575 00:42:17 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:31.575 00:42:17 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:31.575 00:42:17 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:31.575 00:42:17 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@198 -- # 
min_disk_size=3221225472 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:c9:00.0 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:34.876 No valid GPT data, bailing 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:34.876 00:42:21 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:34.876 00:42:21 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:34.876 00:42:21 setup.sh.devices -- setup/common.sh@80 -- # echo 960197124096 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:03:00.0 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:05:34.876 No valid GPT data, bailing 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:34.876 00:42:21 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:34.876 00:42:21 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:34.876 00:42:21 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:34.876 00:42:21 setup.sh.devices -- setup/common.sh@80 -- # echo 960197124096 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:03:00.0 00:05:34.876 00:42:21 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 2 > 0 )) 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:34.876 00:42:21 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.876 00:42:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:34.876 ************************************ 00:05:34.876 START TEST nvme_mount 00:05:34.876 ************************************ 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:34.876 00:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:35.448 Creating new GPT entries in memory. 00:05:35.448 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:35.448 other utilities. 00:05:35.448 00:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:35.448 00:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.448 00:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:35.448 00:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:35.448 00:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:36.831 Creating new GPT entries in memory. 00:05:36.831 The operation has completed successfully. 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3257734 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.831 00:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: 
mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # 
[[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:39.376 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:39.637 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:39.637 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:39.898 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:39.898 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:05:39.898 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:39.898 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 
0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.898 00:42:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:42.442 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' '' 00:05:42.701 00:42:29 
setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.701 00:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:45.247 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:45.818 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:45.818 00:05:45.818 real 0m11.162s 00:05:45.818 user 0m2.824s 00:05:45.818 sys 0m5.482s 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.818 00:42:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:45.818 ************************************ 00:05:45.818 END TEST nvme_mount 00:05:45.818 ************************************ 00:05:45.818 00:42:32 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:45.818 00:42:32 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.818 
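The nvme_mount test that finishes above exercises a full partition/format/mount/cleanup cycle on the NVMe disk behind 0000:c9:00.0, after the devices suite has skipped zoned namespaces, validated both disks with spdk-gpt.py ("No valid GPT data, bailing"), and confirmed they exceed min_disk_size. sgdisk wipes and recreates a 1 GiB GPT partition, mkfs.ext4 -qF formats it, the partition is mounted under test/setup/nvme_mount, verify() greps the "Active devices" line from setup.sh config so the in-use device is not rebound, and cleanup unmounts and wipefs-erases the signatures. A condensed sketch of that cycle, assuming /dev/nvme0n1 is a disposable scratch disk; the paths are illustrative, not the test's constants:

  #!/usr/bin/env bash
  set -euo pipefail
  disk=/dev/nvme0n1          # assumed scratch disk
  mnt=/tmp/nvme_mount        # illustrative mount point

  sgdisk "$disk" --zap-all                     # destroy any existing GPT/MBR data
  sgdisk "$disk" --new=1:2048:2099199          # one 1 GiB partition, as in the trace
  mkfs.ext4 -qF "${disk}p1"
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                       # the dummy file the verify step looks for

  # Cleanup mirrors cleanup_nvme: unmount, then wipe partition and disk signatures.
  umount "$mnt"
  wipefs --all "${disk}p1"
  wipefs --all "$disk"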
00:42:32 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.818 00:42:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:45.818 ************************************ 00:05:45.818 START TEST dm_mount 00:05:45.818 ************************************ 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:45.818 00:42:32 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:46.759 Creating new GPT entries in memory. 00:05:46.759 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:46.759 other utilities. 00:05:46.759 00:42:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:46.759 00:42:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.759 00:42:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:46.759 00:42:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:46.759 00:42:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:47.698 Creating new GPT entries in memory. 00:05:47.698 The operation has completed successfully. 
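The dm_mount test repeats the same sgdisk sequence but creates two 1 GiB partitions: each sgdisk --new runs under an flock on the disk while sync_dev_uevents.sh waits for the nvme0n1p1 and nvme0n1p2 partition uevents, and the trace that follows assembles a device-mapper target over both with dmsetup create and resolves it to dm-0 via readlink -f /dev/mapper/nvme_dm_test. A sketch of the two-partition step, again assuming a disposable /dev/nvme0n1; udevadm settle stands in here for SPDK's uevent-sync helper:

  #!/usr/bin/env bash
  set -euo pipefail
  disk=/dev/nvme0n1                                        # assumed scratch disk

  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199        # first 1 GiB partition
  flock "$disk" sgdisk "$disk" --new=2:2099200:4196351     # second 1 GiB partition
  udevadm settle                                           # wait for the partition uevents
  lsblk "$disk"                                            # nvme0n1p1 and nvme0n1p2 should both appear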
00:05:47.698 00:42:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:47.698 00:42:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.698 00:42:34 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:47.698 00:42:34 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:47.698 00:42:34 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:49.081 The operation has completed successfully. 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3262501 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size= 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:c9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:49.081 00:42:35 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.081 00:42:35 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:51.619 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:51.879 00:42:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 
00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:55.173 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:55.173 00:05:55.173 real 0m9.283s 00:05:55.173 user 0m1.993s 00:05:55.173 sys 0m3.869s 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.173 00:42:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:55.173 ************************************ 00:05:55.173 END TEST dm_mount 00:05:55.173 ************************************ 00:05:55.173 00:42:41 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:55.173 00:42:41 setup.sh.devices -- setup/devices.sh@11 -- # 
cleanup_nvme 00:05:55.173 00:42:41 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:05:55.173 00:42:41 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.173 00:42:41 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:55.173 00:42:41 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.173 00:42:41 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:55.433 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:55.433 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:05:55.433 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:55.433 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:55.433 00:42:42 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:55.433 00:42:42 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:05:55.433 00:42:42 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:55.433 00:42:42 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.433 00:42:42 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:55.433 00:42:42 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.433 00:42:42 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:55.433 00:05:55.433 real 0m24.403s 00:05:55.433 user 0m6.042s 00:05:55.433 sys 0m11.726s 00:05:55.433 00:42:42 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.433 00:42:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:55.433 ************************************ 00:05:55.433 END TEST devices 00:05:55.433 ************************************ 00:05:55.433 00:05:55.433 real 1m22.956s 00:05:55.433 user 0m22.665s 00:05:55.433 sys 0m44.014s 00:05:55.433 00:42:42 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.433 00:42:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:55.433 ************************************ 00:05:55.433 END TEST setup.sh 00:05:55.433 ************************************ 00:05:55.433 00:42:42 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:05:58.725 Hugepages 00:05:58.725 node hugesize free / total 00:05:58.725 node0 1048576kB 0 / 0 00:05:58.725 node0 2048kB 2048 / 2048 00:05:58.725 node1 1048576kB 0 / 0 00:05:58.725 node1 2048kB 0 / 0 00:05:58.725 00:05:58.725 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:58.725 NVMe 0000:03:00.0 1344 51c3 0 nvme nvme1 nvme1n1 00:05:58.725 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:05:58.725 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:05:58.725 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:05:58.725 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:05:58.725 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:05:58.725 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:05:58.725 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:05:58.725 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:05:58.725 NVMe 0000:c9:00.0 144d a80a 1 nvme nvme0 nvme0n1 00:05:58.725 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:05:58.725 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:05:58.725 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:05:58.725 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:05:58.725 DSA 0000:f1:01.0 8086 0b25 1 idxd - 
- 00:05:58.725 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:05:58.725 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:05:58.725 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:05:58.725 00:42:45 -- spdk/autotest.sh@130 -- # uname -s 00:05:58.725 00:42:45 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:58.725 00:42:45 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:58.725 00:42:45 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:06:01.365 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:01.365 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:01.365 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:01.365 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:06:01.365 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:01.365 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:06:01.365 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:01.365 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:06:01.365 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:01.365 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:06:01.365 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:06:01.365 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:06:01.365 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:01.365 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:06:01.365 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:01.365 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:06:01.935 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:06:02.196 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:06:02.456 00:42:49 -- common/autotest_common.sh@1528 -- # sleep 1 00:06:03.397 00:42:50 -- common/autotest_common.sh@1529 -- # bdfs=() 00:06:03.397 00:42:50 -- common/autotest_common.sh@1529 -- # local bdfs 00:06:03.397 00:42:50 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:06:03.397 00:42:50 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:06:03.397 00:42:50 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:03.397 00:42:50 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:03.397 00:42:50 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:03.397 00:42:50 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:03.397 00:42:50 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:03.655 00:42:50 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:06:03.655 00:42:50 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:06:03.655 00:42:50 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:06:06.948 Waiting for block devices as requested 00:06:06.948 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:06:06.948 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:06:06.948 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:06:06.948 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:06:06.948 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:06:06.948 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:06:06.948 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:06:07.206 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:06:07.206 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:06:07.206 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:06:07.206 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:06:07.466 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:06:07.466 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:06:07.466 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:06:07.466 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 
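The "idxd -> vfio-pci" and "vfio-pci -> nvme" lines above (the listing continues below) are scripts/setup.sh moving PCI functions between kernel drivers and vfio-pci. The log does not show the mechanism itself; a rough sketch of the usual sysfs sequence for one device, assuming the standard driver_override interface and root privileges:

    bdf=0000:c9:00.0; target=nvme          # assumption: device and target driver taken from the lines above
    # release the device from whatever driver currently owns it
    [ -e /sys/bus/pci/devices/$bdf/driver ] && echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind
    # pin the next probe to the requested driver, then ask the PCI core to re-probe the function
    echo "$target" > /sys/bus/pci/devices/$bdf/driver_override
    echo "$bdf"    > /sys/bus/pci/drivers_probe
    # clear the override so later rebinds are not constrained
    echo ""        > /sys/bus/pci/devices/$bdf/driver_override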
00:06:07.726 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:06:07.726 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:06:07.726 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:06:07.985 00:42:54 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:07.985 00:42:54 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:03:00.0 00:06:07.985 00:42:54 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:07.985 00:42:54 -- common/autotest_common.sh@1498 -- # grep 0000:03:00.0/nvme/nvme 00:06:07.985 00:42:55 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:06:07.985 00:42:55 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 ]] 00:06:07.985 00:42:55 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:06:07.985 00:42:55 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:06:07.985 00:42:55 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:06:07.985 00:42:55 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:06:07.986 00:42:55 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:06:07.986 00:42:55 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:07.986 00:42:55 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:07.986 00:42:55 -- common/autotest_common.sh@1541 -- # oacs=' 0x5e' 00:06:07.986 00:42:55 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:07.986 00:42:55 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:07.986 00:42:55 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:07.986 00:42:55 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:06:07.986 00:42:55 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:07.986 00:42:55 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:07.986 00:42:55 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:07.986 00:42:55 -- common/autotest_common.sh@1553 -- # continue 00:06:07.986 00:42:55 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:07.986 00:42:55 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:06:07.986 00:42:55 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:07.986 00:42:55 -- common/autotest_common.sh@1498 -- # grep 0000:c9:00.0/nvme/nvme 00:06:07.986 00:42:55 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:06:07.986 00:42:55 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:06:07.986 00:42:55 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:06:08.246 00:42:55 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:08.246 00:42:55 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:08.246 00:42:55 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:08.246 00:42:55 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:08.246 00:42:55 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:08.246 00:42:55 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:08.246 00:42:55 -- common/autotest_common.sh@1541 -- # oacs=' 0x5f' 00:06:08.246 00:42:55 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:08.246 00:42:55 -- 
common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:08.246 00:42:55 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:08.246 00:42:55 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:08.246 00:42:55 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:08.246 00:42:55 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:08.246 00:42:55 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:08.246 00:42:55 -- common/autotest_common.sh@1553 -- # continue 00:06:08.246 00:42:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:08.246 00:42:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.246 00:42:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.246 00:42:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:08.246 00:42:55 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:08.246 00:42:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.246 00:42:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:06:11.544 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:11.544 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:11.544 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:11.544 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:06:11.544 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:11.544 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:06:11.544 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:11.544 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:06:11.544 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:11.544 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:06:11.544 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:06:11.544 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:06:11.544 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:11.544 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:06:11.544 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:06:11.544 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:06:12.115 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:06:12.376 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:06:12.376 00:42:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:12.376 00:42:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.376 00:42:59 -- common/autotest_common.sh@10 -- # set +x 00:06:12.637 00:42:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:12.637 00:42:59 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:12.637 00:42:59 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:12.637 00:42:59 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:12.637 00:42:59 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:12.637 00:42:59 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:12.637 00:42:59 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:12.637 00:42:59 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:12.637 00:42:59 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:12.637 00:42:59 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:12.637 00:42:59 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:12.637 00:42:59 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:06:12.637 00:42:59 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:06:12.637 00:42:59 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:12.637 00:42:59 -- common/autotest_common.sh@1576 -- # cat 
/sys/bus/pci/devices/0000:03:00.0/device 00:06:12.637 00:42:59 -- common/autotest_common.sh@1576 -- # device=0x51c3 00:06:12.637 00:42:59 -- common/autotest_common.sh@1577 -- # [[ 0x51c3 == \0\x\0\a\5\4 ]] 00:06:12.637 00:42:59 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:12.637 00:42:59 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:c9:00.0/device 00:06:12.637 00:42:59 -- common/autotest_common.sh@1576 -- # device=0xa80a 00:06:12.637 00:42:59 -- common/autotest_common.sh@1577 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:12.637 00:42:59 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:06:12.637 00:42:59 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:06:12.637 00:42:59 -- common/autotest_common.sh@1589 -- # return 0 00:06:12.637 00:42:59 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:12.637 00:42:59 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:12.637 00:42:59 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:12.637 00:42:59 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:12.637 00:42:59 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:12.637 00:42:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:12.637 00:42:59 -- common/autotest_common.sh@10 -- # set +x 00:06:12.637 00:42:59 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:06:12.637 00:42:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.637 00:42:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.637 00:42:59 -- common/autotest_common.sh@10 -- # set +x 00:06:12.637 ************************************ 00:06:12.637 START TEST env 00:06:12.637 ************************************ 00:06:12.637 00:42:59 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:06:12.898 * Looking for test storage... 
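The namespace-revert and opal_revert_cleanup checks above decide what to do with each controller from two pieces of information: the Identify Controller fields OACS (bit 3, value 0x8, is namespace management) and UNVMCAP, and the PCI device ID compared against 0x0a54. A condensed sketch of those probes using the same nvme-cli and sysfs reads seen in the log; the loop structure is mine, the two bdf values are the controllers found on this host:

    for bdf in 0000:03:00.0 0000:c9:00.0; do
        # which nvme character device sits behind this PCI function?
        ctrl=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
        # OACS (optional admin command support): 0x5e/0x5f above, so bit 3 (namespace mgmt) is set
        oacs=$(nvme id-ctrl /dev/$ctrl | grep oacs | cut -d: -f2)
        # UNVMCAP: unallocated NVM capacity; 0 means all capacity is already allocated
        unvmcap=$(nvme id-ctrl /dev/$ctrl | grep unvmcap | cut -d: -f2)
        devid=$(cat /sys/bus/pci/devices/$bdf/device)   # 0x51c3 / 0xa80a above, neither is 0x0a54
        echo "$bdf -> /dev/$ctrl oacs=$oacs unvmcap=$unvmcap devid=$devid"
    done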
00:06:12.898 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:06:12.898 00:42:59 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:06:12.898 00:42:59 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.898 00:42:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.898 00:42:59 env -- common/autotest_common.sh@10 -- # set +x 00:06:12.898 ************************************ 00:06:12.898 START TEST env_memory 00:06:12.898 ************************************ 00:06:12.898 00:42:59 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:06:12.898 00:06:12.898 00:06:12.898 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.898 http://cunit.sourceforge.net/ 00:06:12.898 00:06:12.898 00:06:12.898 Suite: memory 00:06:12.898 Test: alloc and free memory map ...[2024-05-15 00:42:59.782402] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:12.898 passed 00:06:12.898 Test: mem map translation ...[2024-05-15 00:42:59.808711] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:12.898 [2024-05-15 00:42:59.808737] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:12.898 [2024-05-15 00:42:59.808783] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:12.898 [2024-05-15 00:42:59.808797] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:12.898 passed 00:06:12.898 Test: mem map registration ...[2024-05-15 00:42:59.856796] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:12.898 [2024-05-15 00:42:59.856814] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:12.898 passed 00:06:12.898 Test: mem map adjacent registrations ...passed 00:06:12.898 00:06:12.898 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.898 suites 1 1 n/a 0 0 00:06:12.898 tests 4 4 4 0 0 00:06:12.898 asserts 152 152 152 0 n/a 00:06:12.898 00:06:12.898 Elapsed time = 0.163 seconds 00:06:12.898 00:06:12.898 real 0m0.178s 00:06:12.898 user 0m0.169s 00:06:12.898 sys 0m0.009s 00:06:12.898 00:42:59 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.898 00:42:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:12.898 ************************************ 00:06:12.898 END TEST env_memory 00:06:12.898 ************************************ 00:06:13.160 00:42:59 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:13.160 00:42:59 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.160 00:42:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.160 00:42:59 env -- common/autotest_common.sh@10 
-- # set +x 00:06:13.160 ************************************ 00:06:13.160 START TEST env_vtophys 00:06:13.160 ************************************ 00:06:13.160 00:42:59 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:13.160 EAL: lib.eal log level changed from notice to debug 00:06:13.160 EAL: Detected lcore 0 as core 0 on socket 0 00:06:13.160 EAL: Detected lcore 1 as core 1 on socket 0 00:06:13.160 EAL: Detected lcore 2 as core 2 on socket 0 00:06:13.160 EAL: Detected lcore 3 as core 3 on socket 0 00:06:13.160 EAL: Detected lcore 4 as core 4 on socket 0 00:06:13.160 EAL: Detected lcore 5 as core 5 on socket 0 00:06:13.160 EAL: Detected lcore 6 as core 6 on socket 0 00:06:13.160 EAL: Detected lcore 7 as core 7 on socket 0 00:06:13.160 EAL: Detected lcore 8 as core 8 on socket 0 00:06:13.160 EAL: Detected lcore 9 as core 9 on socket 0 00:06:13.160 EAL: Detected lcore 10 as core 10 on socket 0 00:06:13.160 EAL: Detected lcore 11 as core 11 on socket 0 00:06:13.160 EAL: Detected lcore 12 as core 12 on socket 0 00:06:13.160 EAL: Detected lcore 13 as core 13 on socket 0 00:06:13.160 EAL: Detected lcore 14 as core 14 on socket 0 00:06:13.160 EAL: Detected lcore 15 as core 15 on socket 0 00:06:13.160 EAL: Detected lcore 16 as core 16 on socket 0 00:06:13.160 EAL: Detected lcore 17 as core 17 on socket 0 00:06:13.160 EAL: Detected lcore 18 as core 18 on socket 0 00:06:13.160 EAL: Detected lcore 19 as core 19 on socket 0 00:06:13.160 EAL: Detected lcore 20 as core 20 on socket 0 00:06:13.160 EAL: Detected lcore 21 as core 21 on socket 0 00:06:13.160 EAL: Detected lcore 22 as core 22 on socket 0 00:06:13.160 EAL: Detected lcore 23 as core 23 on socket 0 00:06:13.160 EAL: Detected lcore 24 as core 24 on socket 0 00:06:13.160 EAL: Detected lcore 25 as core 25 on socket 0 00:06:13.160 EAL: Detected lcore 26 as core 26 on socket 0 00:06:13.160 EAL: Detected lcore 27 as core 27 on socket 0 00:06:13.160 EAL: Detected lcore 28 as core 28 on socket 0 00:06:13.160 EAL: Detected lcore 29 as core 29 on socket 0 00:06:13.160 EAL: Detected lcore 30 as core 30 on socket 0 00:06:13.160 EAL: Detected lcore 31 as core 31 on socket 0 00:06:13.160 EAL: Detected lcore 32 as core 0 on socket 1 00:06:13.160 EAL: Detected lcore 33 as core 1 on socket 1 00:06:13.160 EAL: Detected lcore 34 as core 2 on socket 1 00:06:13.160 EAL: Detected lcore 35 as core 3 on socket 1 00:06:13.160 EAL: Detected lcore 36 as core 4 on socket 1 00:06:13.160 EAL: Detected lcore 37 as core 5 on socket 1 00:06:13.160 EAL: Detected lcore 38 as core 6 on socket 1 00:06:13.160 EAL: Detected lcore 39 as core 7 on socket 1 00:06:13.160 EAL: Detected lcore 40 as core 8 on socket 1 00:06:13.160 EAL: Detected lcore 41 as core 9 on socket 1 00:06:13.160 EAL: Detected lcore 42 as core 10 on socket 1 00:06:13.160 EAL: Detected lcore 43 as core 11 on socket 1 00:06:13.160 EAL: Detected lcore 44 as core 12 on socket 1 00:06:13.160 EAL: Detected lcore 45 as core 13 on socket 1 00:06:13.160 EAL: Detected lcore 46 as core 14 on socket 1 00:06:13.160 EAL: Detected lcore 47 as core 15 on socket 1 00:06:13.160 EAL: Detected lcore 48 as core 16 on socket 1 00:06:13.160 EAL: Detected lcore 49 as core 17 on socket 1 00:06:13.160 EAL: Detected lcore 50 as core 18 on socket 1 00:06:13.160 EAL: Detected lcore 51 as core 19 on socket 1 00:06:13.160 EAL: Detected lcore 52 as core 20 on socket 1 00:06:13.160 EAL: Detected lcore 53 as core 21 on socket 1 00:06:13.160 EAL: Detected lcore 54 as 
core 22 on socket 1 00:06:13.160 EAL: Detected lcore 55 as core 23 on socket 1 00:06:13.160 EAL: Detected lcore 56 as core 24 on socket 1 00:06:13.160 EAL: Detected lcore 57 as core 25 on socket 1 00:06:13.160 EAL: Detected lcore 58 as core 26 on socket 1 00:06:13.160 EAL: Detected lcore 59 as core 27 on socket 1 00:06:13.160 EAL: Detected lcore 60 as core 28 on socket 1 00:06:13.160 EAL: Detected lcore 61 as core 29 on socket 1 00:06:13.160 EAL: Detected lcore 62 as core 30 on socket 1 00:06:13.160 EAL: Detected lcore 63 as core 31 on socket 1 00:06:13.160 EAL: Detected lcore 64 as core 0 on socket 0 00:06:13.160 EAL: Detected lcore 65 as core 1 on socket 0 00:06:13.160 EAL: Detected lcore 66 as core 2 on socket 0 00:06:13.160 EAL: Detected lcore 67 as core 3 on socket 0 00:06:13.160 EAL: Detected lcore 68 as core 4 on socket 0 00:06:13.161 EAL: Detected lcore 69 as core 5 on socket 0 00:06:13.161 EAL: Detected lcore 70 as core 6 on socket 0 00:06:13.161 EAL: Detected lcore 71 as core 7 on socket 0 00:06:13.161 EAL: Detected lcore 72 as core 8 on socket 0 00:06:13.161 EAL: Detected lcore 73 as core 9 on socket 0 00:06:13.161 EAL: Detected lcore 74 as core 10 on socket 0 00:06:13.161 EAL: Detected lcore 75 as core 11 on socket 0 00:06:13.161 EAL: Detected lcore 76 as core 12 on socket 0 00:06:13.161 EAL: Detected lcore 77 as core 13 on socket 0 00:06:13.161 EAL: Detected lcore 78 as core 14 on socket 0 00:06:13.161 EAL: Detected lcore 79 as core 15 on socket 0 00:06:13.161 EAL: Detected lcore 80 as core 16 on socket 0 00:06:13.161 EAL: Detected lcore 81 as core 17 on socket 0 00:06:13.161 EAL: Detected lcore 82 as core 18 on socket 0 00:06:13.161 EAL: Detected lcore 83 as core 19 on socket 0 00:06:13.161 EAL: Detected lcore 84 as core 20 on socket 0 00:06:13.161 EAL: Detected lcore 85 as core 21 on socket 0 00:06:13.161 EAL: Detected lcore 86 as core 22 on socket 0 00:06:13.161 EAL: Detected lcore 87 as core 23 on socket 0 00:06:13.161 EAL: Detected lcore 88 as core 24 on socket 0 00:06:13.161 EAL: Detected lcore 89 as core 25 on socket 0 00:06:13.161 EAL: Detected lcore 90 as core 26 on socket 0 00:06:13.161 EAL: Detected lcore 91 as core 27 on socket 0 00:06:13.161 EAL: Detected lcore 92 as core 28 on socket 0 00:06:13.161 EAL: Detected lcore 93 as core 29 on socket 0 00:06:13.161 EAL: Detected lcore 94 as core 30 on socket 0 00:06:13.161 EAL: Detected lcore 95 as core 31 on socket 0 00:06:13.161 EAL: Detected lcore 96 as core 0 on socket 1 00:06:13.161 EAL: Detected lcore 97 as core 1 on socket 1 00:06:13.161 EAL: Detected lcore 98 as core 2 on socket 1 00:06:13.161 EAL: Detected lcore 99 as core 3 on socket 1 00:06:13.161 EAL: Detected lcore 100 as core 4 on socket 1 00:06:13.161 EAL: Detected lcore 101 as core 5 on socket 1 00:06:13.161 EAL: Detected lcore 102 as core 6 on socket 1 00:06:13.161 EAL: Detected lcore 103 as core 7 on socket 1 00:06:13.161 EAL: Detected lcore 104 as core 8 on socket 1 00:06:13.161 EAL: Detected lcore 105 as core 9 on socket 1 00:06:13.161 EAL: Detected lcore 106 as core 10 on socket 1 00:06:13.161 EAL: Detected lcore 107 as core 11 on socket 1 00:06:13.161 EAL: Detected lcore 108 as core 12 on socket 1 00:06:13.161 EAL: Detected lcore 109 as core 13 on socket 1 00:06:13.161 EAL: Detected lcore 110 as core 14 on socket 1 00:06:13.161 EAL: Detected lcore 111 as core 15 on socket 1 00:06:13.161 EAL: Detected lcore 112 as core 16 on socket 1 00:06:13.161 EAL: Detected lcore 113 as core 17 on socket 1 00:06:13.161 EAL: Detected lcore 114 as core 18 on socket 1 
00:06:13.161 EAL: Detected lcore 115 as core 19 on socket 1 00:06:13.161 EAL: Detected lcore 116 as core 20 on socket 1 00:06:13.161 EAL: Detected lcore 117 as core 21 on socket 1 00:06:13.161 EAL: Detected lcore 118 as core 22 on socket 1 00:06:13.161 EAL: Detected lcore 119 as core 23 on socket 1 00:06:13.161 EAL: Detected lcore 120 as core 24 on socket 1 00:06:13.161 EAL: Detected lcore 121 as core 25 on socket 1 00:06:13.161 EAL: Detected lcore 122 as core 26 on socket 1 00:06:13.161 EAL: Detected lcore 123 as core 27 on socket 1 00:06:13.161 EAL: Detected lcore 124 as core 28 on socket 1 00:06:13.161 EAL: Detected lcore 125 as core 29 on socket 1 00:06:13.161 EAL: Detected lcore 126 as core 30 on socket 1 00:06:13.161 EAL: Detected lcore 127 as core 31 on socket 1 00:06:13.161 EAL: Maximum logical cores by configuration: 128 00:06:13.161 EAL: Detected CPU lcores: 128 00:06:13.161 EAL: Detected NUMA nodes: 2 00:06:13.161 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:13.161 EAL: Detected shared linkage of DPDK 00:06:13.161 EAL: No shared files mode enabled, IPC will be disabled 00:06:13.161 EAL: Bus pci wants IOVA as 'DC' 00:06:13.161 EAL: Buses did not request a specific IOVA mode. 00:06:13.161 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:13.161 EAL: Selected IOVA mode 'VA' 00:06:13.161 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.161 EAL: Probing VFIO support... 00:06:13.161 EAL: IOMMU type 1 (Type 1) is supported 00:06:13.161 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:13.161 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:13.161 EAL: VFIO support initialized 00:06:13.161 EAL: Ask a virtual area of 0x2e000 bytes 00:06:13.161 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:13.161 EAL: Setting up physically contiguous memory... 
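The lcore-to-core/socket table EAL printed above is derived from the kernel's CPU topology files. A rough bash equivalent, under the assumption that EAL reads the same sysfs attributes on Linux (its core numbering may differ in detail from the raw core_id values):

    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "lcore $lcore as core $core on socket $socket"
    done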
00:06:13.161 EAL: Setting maximum number of open files to 524288 00:06:13.161 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:13.161 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:13.161 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:13.161 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.161 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:13.161 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.161 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.161 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:13.161 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:13.161 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.161 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:13.161 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.161 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.161 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:13.161 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:13.161 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.161 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:13.161 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.161 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.161 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:13.161 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:13.161 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.161 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:13.161 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.161 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.161 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:13.161 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:13.161 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:13.161 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.161 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:13.161 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:13.161 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.161 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:13.161 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:13.161 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.161 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:13.161 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:13.161 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.161 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:13.161 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:13.161 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.161 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:13.161 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:13.161 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.161 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:13.161 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:13.161 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.161 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:13.161 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:13.161 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.161 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:13.161 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:13.161 EAL: Hugepages will be freed exactly as allocated. 00:06:13.161 EAL: No shared files mode enabled, IPC is disabled 00:06:13.161 EAL: No shared files mode enabled, IPC is disabled 00:06:13.161 EAL: TSC frequency is ~1900000 KHz 00:06:13.161 EAL: Main lcore 0 is ready (tid=7effe1ad5a40;cpuset=[0]) 00:06:13.161 EAL: Trying to obtain current memory policy. 00:06:13.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.161 EAL: Restoring previous memory policy: 0 00:06:13.161 EAL: request: mp_malloc_sync 00:06:13.161 EAL: No shared files mode enabled, IPC is disabled 00:06:13.161 EAL: Heap on socket 0 was expanded by 2MB 00:06:13.161 EAL: No shared files mode enabled, IPC is disabled 00:06:13.161 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:13.161 EAL: Mem event callback 'spdk:(nil)' registered 00:06:13.161 00:06:13.161 00:06:13.161 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.161 http://cunit.sourceforge.net/ 00:06:13.161 00:06:13.161 00:06:13.161 Suite: components_suite 00:06:13.422 Test: vtophys_malloc_test ...passed 00:06:13.422 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:13.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.422 EAL: Restoring previous memory policy: 4 00:06:13.422 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.422 EAL: request: mp_malloc_sync 00:06:13.422 EAL: No shared files mode enabled, IPC is disabled 00:06:13.422 EAL: Heap on socket 0 was expanded by 4MB 00:06:13.422 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.422 EAL: request: mp_malloc_sync 00:06:13.422 EAL: No shared files mode enabled, IPC is disabled 00:06:13.422 EAL: Heap on socket 0 was shrunk by 4MB 00:06:13.422 EAL: Trying to obtain current memory policy. 00:06:13.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.422 EAL: Restoring previous memory policy: 4 00:06:13.422 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.422 EAL: request: mp_malloc_sync 00:06:13.422 EAL: No shared files mode enabled, IPC is disabled 00:06:13.422 EAL: Heap on socket 0 was expanded by 6MB 00:06:13.422 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.422 EAL: request: mp_malloc_sync 00:06:13.422 EAL: No shared files mode enabled, IPC is disabled 00:06:13.422 EAL: Heap on socket 0 was shrunk by 6MB 00:06:13.422 EAL: Trying to obtain current memory policy. 00:06:13.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.422 EAL: Restoring previous memory policy: 4 00:06:13.422 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.422 EAL: request: mp_malloc_sync 00:06:13.422 EAL: No shared files mode enabled, IPC is disabled 00:06:13.422 EAL: Heap on socket 0 was expanded by 10MB 00:06:13.422 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.422 EAL: request: mp_malloc_sync 00:06:13.422 EAL: No shared files mode enabled, IPC is disabled 00:06:13.422 EAL: Heap on socket 0 was shrunk by 10MB 00:06:13.422 EAL: Trying to obtain current memory policy. 
00:06:13.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.422 EAL: Restoring previous memory policy: 4 00:06:13.422 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.422 EAL: request: mp_malloc_sync 00:06:13.422 EAL: No shared files mode enabled, IPC is disabled 00:06:13.422 EAL: Heap on socket 0 was expanded by 18MB 00:06:13.422 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.422 EAL: request: mp_malloc_sync 00:06:13.422 EAL: No shared files mode enabled, IPC is disabled 00:06:13.422 EAL: Heap on socket 0 was shrunk by 18MB 00:06:13.422 EAL: Trying to obtain current memory policy. 00:06:13.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.422 EAL: Restoring previous memory policy: 4 00:06:13.422 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.422 EAL: request: mp_malloc_sync 00:06:13.423 EAL: No shared files mode enabled, IPC is disabled 00:06:13.423 EAL: Heap on socket 0 was expanded by 34MB 00:06:13.423 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.423 EAL: request: mp_malloc_sync 00:06:13.423 EAL: No shared files mode enabled, IPC is disabled 00:06:13.423 EAL: Heap on socket 0 was shrunk by 34MB 00:06:13.423 EAL: Trying to obtain current memory policy. 00:06:13.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.423 EAL: Restoring previous memory policy: 4 00:06:13.423 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.423 EAL: request: mp_malloc_sync 00:06:13.423 EAL: No shared files mode enabled, IPC is disabled 00:06:13.423 EAL: Heap on socket 0 was expanded by 66MB 00:06:13.683 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.683 EAL: request: mp_malloc_sync 00:06:13.683 EAL: No shared files mode enabled, IPC is disabled 00:06:13.683 EAL: Heap on socket 0 was shrunk by 66MB 00:06:13.683 EAL: Trying to obtain current memory policy. 00:06:13.683 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.683 EAL: Restoring previous memory policy: 4 00:06:13.683 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.683 EAL: request: mp_malloc_sync 00:06:13.683 EAL: No shared files mode enabled, IPC is disabled 00:06:13.683 EAL: Heap on socket 0 was expanded by 130MB 00:06:13.683 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.683 EAL: request: mp_malloc_sync 00:06:13.683 EAL: No shared files mode enabled, IPC is disabled 00:06:13.683 EAL: Heap on socket 0 was shrunk by 130MB 00:06:13.683 EAL: Trying to obtain current memory policy. 00:06:13.683 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.683 EAL: Restoring previous memory policy: 4 00:06:13.683 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.683 EAL: request: mp_malloc_sync 00:06:13.683 EAL: No shared files mode enabled, IPC is disabled 00:06:13.683 EAL: Heap on socket 0 was expanded by 258MB 00:06:13.942 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.942 EAL: request: mp_malloc_sync 00:06:13.942 EAL: No shared files mode enabled, IPC is disabled 00:06:13.942 EAL: Heap on socket 0 was shrunk by 258MB 00:06:14.201 EAL: Trying to obtain current memory policy. 
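The expand/shrink pairs reported by the mem event callbacks in this test step through buffer sizes of 2^k + 2 MB (4, 6, 10, 18, ... 1026 MB): each allocation grows the EAL heap on socket 0 by that amount and the matching free hands it back. The progression can be checked with a one-liner (my arithmetic, matching the sizes in the log):

    for k in $(seq 1 10); do printf '%dMB\n' $(( (1 << k) + 2 )); done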
00:06:14.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:14.201 EAL: Restoring previous memory policy: 4 00:06:14.201 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.201 EAL: request: mp_malloc_sync 00:06:14.201 EAL: No shared files mode enabled, IPC is disabled 00:06:14.201 EAL: Heap on socket 0 was expanded by 514MB 00:06:14.461 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.461 EAL: request: mp_malloc_sync 00:06:14.461 EAL: No shared files mode enabled, IPC is disabled 00:06:14.461 EAL: Heap on socket 0 was shrunk by 514MB 00:06:14.721 EAL: Trying to obtain current memory policy. 00:06:14.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:14.982 EAL: Restoring previous memory policy: 4 00:06:14.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.982 EAL: request: mp_malloc_sync 00:06:14.982 EAL: No shared files mode enabled, IPC is disabled 00:06:14.982 EAL: Heap on socket 0 was expanded by 1026MB 00:06:15.553 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.553 EAL: request: mp_malloc_sync 00:06:15.553 EAL: No shared files mode enabled, IPC is disabled 00:06:15.553 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:16.123 passed 00:06:16.123 00:06:16.123 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.123 suites 1 1 n/a 0 0 00:06:16.123 tests 2 2 2 0 0 00:06:16.123 asserts 497 497 497 0 n/a 00:06:16.123 00:06:16.123 Elapsed time = 2.791 seconds 00:06:16.123 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.123 EAL: request: mp_malloc_sync 00:06:16.123 EAL: No shared files mode enabled, IPC is disabled 00:06:16.123 EAL: Heap on socket 0 was shrunk by 2MB 00:06:16.123 EAL: No shared files mode enabled, IPC is disabled 00:06:16.123 EAL: No shared files mode enabled, IPC is disabled 00:06:16.123 EAL: No shared files mode enabled, IPC is disabled 00:06:16.123 00:06:16.123 real 0m3.017s 00:06:16.123 user 0m2.363s 00:06:16.123 sys 0m0.606s 00:06:16.123 00:43:03 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.123 00:43:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:16.123 ************************************ 00:06:16.123 END TEST env_vtophys 00:06:16.123 ************************************ 00:06:16.123 00:43:03 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:06:16.123 00:43:03 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:16.123 00:43:03 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.123 00:43:03 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.123 ************************************ 00:06:16.123 START TEST env_pci 00:06:16.123 ************************************ 00:06:16.123 00:43:03 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:06:16.123 00:06:16.123 00:06:16.123 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.123 http://cunit.sourceforge.net/ 00:06:16.123 00:06:16.123 00:06:16.123 Suite: pci 00:06:16.123 Test: pci_hook ...[2024-05-15 00:43:03.109089] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3273976 has claimed it 00:06:16.123 EAL: Cannot find device (10000:00:01.0) 00:06:16.123 EAL: Failed to attach device on primary process 00:06:16.123 passed 00:06:16.123 00:06:16.123 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.123 suites 1 1 
n/a 0 0 00:06:16.123 tests 1 1 1 0 0 00:06:16.123 asserts 25 25 25 0 n/a 00:06:16.123 00:06:16.123 Elapsed time = 0.059 seconds 00:06:16.383 00:06:16.383 real 0m0.119s 00:06:16.383 user 0m0.051s 00:06:16.383 sys 0m0.067s 00:06:16.383 00:43:03 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.383 00:43:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:16.383 ************************************ 00:06:16.383 END TEST env_pci 00:06:16.383 ************************************ 00:06:16.383 00:43:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:16.383 00:43:03 env -- env/env.sh@15 -- # uname 00:06:16.383 00:43:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:16.383 00:43:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:16.383 00:43:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:16.383 00:43:03 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:16.383 00:43:03 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.383 00:43:03 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.383 ************************************ 00:06:16.383 START TEST env_dpdk_post_init 00:06:16.383 ************************************ 00:06:16.383 00:43:03 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:16.383 EAL: Detected CPU lcores: 128 00:06:16.383 EAL: Detected NUMA nodes: 2 00:06:16.383 EAL: Detected shared linkage of DPDK 00:06:16.383 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:16.383 EAL: Selected IOVA mode 'VA' 00:06:16.383 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.383 EAL: VFIO support initialized 00:06:16.383 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:16.644 EAL: Using IOMMU type 1 (Type 1) 00:06:16.904 EAL: Probe PCI driver: spdk_nvme (1344:51c3) device: 0000:03:00.0 (socket 0) 00:06:16.904 EAL: Ignore mapping IO port bar(1) 00:06:16.904 EAL: Ignore mapping IO port bar(3) 00:06:16.904 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0) 00:06:17.165 EAL: Ignore mapping IO port bar(1) 00:06:17.165 EAL: Ignore mapping IO port bar(3) 00:06:17.165 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6a:02.0 (socket 0) 00:06:17.425 EAL: Ignore mapping IO port bar(1) 00:06:17.425 EAL: Ignore mapping IO port bar(3) 00:06:17.425 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6f:01.0 (socket 0) 00:06:17.685 EAL: Ignore mapping IO port bar(1) 00:06:17.685 EAL: Ignore mapping IO port bar(3) 00:06:17.685 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6f:02.0 (socket 0) 00:06:17.685 EAL: Ignore mapping IO port bar(1) 00:06:17.685 EAL: Ignore mapping IO port bar(3) 00:06:17.945 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:74:01.0 (socket 0) 00:06:17.945 EAL: Ignore mapping IO port bar(1) 00:06:17.945 EAL: Ignore mapping IO port bar(3) 00:06:18.204 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:74:02.0 (socket 0) 00:06:18.204 EAL: Ignore mapping IO port bar(1) 00:06:18.204 EAL: Ignore mapping IO port bar(3) 00:06:18.465 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:79:01.0 (socket 0) 00:06:18.465 EAL: Ignore mapping IO port bar(1) 00:06:18.465 EAL: Ignore mapping IO port bar(3) 00:06:18.465 EAL: 
Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:79:02.0 (socket 0) 00:06:18.725 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:c9:00.0 (socket 1) 00:06:18.985 EAL: Ignore mapping IO port bar(1) 00:06:18.985 EAL: Ignore mapping IO port bar(3) 00:06:18.985 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:e7:01.0 (socket 1) 00:06:19.246 EAL: Ignore mapping IO port bar(1) 00:06:19.246 EAL: Ignore mapping IO port bar(3) 00:06:19.246 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:e7:02.0 (socket 1) 00:06:19.506 EAL: Ignore mapping IO port bar(1) 00:06:19.506 EAL: Ignore mapping IO port bar(3) 00:06:19.506 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:ec:01.0 (socket 1) 00:06:19.506 EAL: Ignore mapping IO port bar(1) 00:06:19.506 EAL: Ignore mapping IO port bar(3) 00:06:19.766 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:ec:02.0 (socket 1) 00:06:19.766 EAL: Ignore mapping IO port bar(1) 00:06:19.766 EAL: Ignore mapping IO port bar(3) 00:06:20.027 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f1:01.0 (socket 1) 00:06:20.027 EAL: Ignore mapping IO port bar(1) 00:06:20.027 EAL: Ignore mapping IO port bar(3) 00:06:20.287 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f1:02.0 (socket 1) 00:06:20.287 EAL: Ignore mapping IO port bar(1) 00:06:20.287 EAL: Ignore mapping IO port bar(3) 00:06:20.287 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f6:01.0 (socket 1) 00:06:20.547 EAL: Ignore mapping IO port bar(1) 00:06:20.547 EAL: Ignore mapping IO port bar(3) 00:06:20.547 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f6:02.0 (socket 1) 00:06:21.488 EAL: Releasing PCI mapped resource for 0000:03:00.0 00:06:21.488 EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x202001000000 00:06:21.488 EAL: Releasing PCI mapped resource for 0000:c9:00.0 00:06:21.488 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x2020011c0000 00:06:21.746 Starting DPDK initialization... 00:06:21.746 Starting SPDK post initialization... 00:06:21.746 SPDK NVMe probe 00:06:21.746 Attaching to 0000:03:00.0 00:06:21.746 Attaching to 0000:c9:00.0 00:06:21.746 Attached to 0000:c9:00.0 00:06:21.746 Attached to 0000:03:00.0 00:06:21.746 Cleaning up... 
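The env_dpdk_post_init output above is the whole lifecycle: EAL comes up in IOVA-VA mode, the idxd and nvme devices on both sockets are probed, the two NVMe controllers are attached, and the mapped resources are released again. A minimal sketch of reproducing the run by hand, assuming hugepage reservation and device binding are done by the in-tree scripts/setup.sh (HUGEMEM is in MB; the value here is illustrative, not a requirement of the test):
  sudo HUGEMEM=2048 ./scripts/setup.sh    # reserve hugepages and bind NVMe/idxd devices to a userspace driver
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000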
00:06:23.192 00:06:23.192 real 0m6.964s 00:06:23.192 user 0m1.077s 00:06:23.192 sys 0m0.189s 00:06:23.192 00:43:10 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.192 00:43:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:23.192 ************************************ 00:06:23.192 END TEST env_dpdk_post_init 00:06:23.192 ************************************ 00:06:23.453 00:43:10 env -- env/env.sh@26 -- # uname 00:06:23.453 00:43:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:23.453 00:43:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:23.454 00:43:10 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.454 00:43:10 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.454 00:43:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:23.454 ************************************ 00:06:23.454 START TEST env_mem_callbacks 00:06:23.454 ************************************ 00:06:23.454 00:43:10 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:23.454 EAL: Detected CPU lcores: 128 00:06:23.454 EAL: Detected NUMA nodes: 2 00:06:23.454 EAL: Detected shared linkage of DPDK 00:06:23.454 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:23.454 EAL: Selected IOVA mode 'VA' 00:06:23.454 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.454 EAL: VFIO support initialized 00:06:23.454 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:23.454 00:06:23.454 00:06:23.454 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.454 http://cunit.sourceforge.net/ 00:06:23.454 00:06:23.454 00:06:23.454 Suite: memory 00:06:23.454 Test: test ... 
00:06:23.454 register 0x200000200000 2097152 00:06:23.454 malloc 3145728 00:06:23.454 register 0x200000400000 4194304 00:06:23.454 buf 0x2000004fffc0 len 3145728 PASSED 00:06:23.454 malloc 64 00:06:23.454 buf 0x2000004ffec0 len 64 PASSED 00:06:23.454 malloc 4194304 00:06:23.454 register 0x200000800000 6291456 00:06:23.454 buf 0x2000009fffc0 len 4194304 PASSED 00:06:23.454 free 0x2000004fffc0 3145728 00:06:23.454 free 0x2000004ffec0 64 00:06:23.454 unregister 0x200000400000 4194304 PASSED 00:06:23.454 free 0x2000009fffc0 4194304 00:06:23.454 unregister 0x200000800000 6291456 PASSED 00:06:23.454 malloc 8388608 00:06:23.454 register 0x200000400000 10485760 00:06:23.454 buf 0x2000005fffc0 len 8388608 PASSED 00:06:23.454 free 0x2000005fffc0 8388608 00:06:23.454 unregister 0x200000400000 10485760 PASSED 00:06:23.454 passed 00:06:23.454 00:06:23.454 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.454 suites 1 1 n/a 0 0 00:06:23.454 tests 1 1 1 0 0 00:06:23.454 asserts 15 15 15 0 n/a 00:06:23.454 00:06:23.454 Elapsed time = 0.024 seconds 00:06:23.454 00:06:23.454 real 0m0.166s 00:06:23.454 user 0m0.066s 00:06:23.454 sys 0m0.098s 00:06:23.454 00:43:10 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.454 00:43:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:23.454 ************************************ 00:06:23.454 END TEST env_mem_callbacks 00:06:23.454 ************************************ 00:06:23.714 00:06:23.714 real 0m10.876s 00:06:23.714 user 0m3.852s 00:06:23.715 sys 0m1.290s 00:06:23.715 00:43:10 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.715 00:43:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:23.715 ************************************ 00:06:23.715 END TEST env 00:06:23.715 ************************************ 00:06:23.715 00:43:10 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:06:23.715 00:43:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.715 00:43:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.715 00:43:10 -- common/autotest_common.sh@10 -- # set +x 00:06:23.715 ************************************ 00:06:23.715 START TEST rpc 00:06:23.715 ************************************ 00:06:23.715 00:43:10 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:06:23.715 * Looking for test storage... 00:06:23.715 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:06:23.715 00:43:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3275572 00:06:23.715 00:43:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.715 00:43:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3275572 00:06:23.715 00:43:10 rpc -- common/autotest_common.sh@827 -- # '[' -z 3275572 ']' 00:06:23.715 00:43:10 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.715 00:43:10 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.715 00:43:10 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
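The waitforlisten step here blocks until the spdk_tgt started for this suite (with '-e bdev', launched just below) answers on /var/tmp/spdk.sock. Roughly, and assuming the in-tree ./scripts/rpc.py client (the helper's real internals differ in detail), it amounts to:
  ./build/bin/spdk_tgt -e bdev &
  pid=$!
  until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
  echo "spdk_tgt (pid $pid) is listening on /var/tmp/spdk.sock"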
00:06:23.715 00:43:10 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.715 00:43:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.715 00:43:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:23.715 [2024-05-15 00:43:10.739464] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:06:23.715 [2024-05-15 00:43:10.739594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275572 ] 00:06:23.975 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.975 [2024-05-15 00:43:10.870308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.975 [2024-05-15 00:43:10.964542] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:23.975 [2024-05-15 00:43:10.964593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3275572' to capture a snapshot of events at runtime. 00:06:23.975 [2024-05-15 00:43:10.964605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.975 [2024-05-15 00:43:10.964614] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.975 [2024-05-15 00:43:10.964623] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3275572 for offline analysis/debug. 00:06:23.975 [2024-05-15 00:43:10.964656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.543 00:43:11 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.543 00:43:11 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:24.543 00:43:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:06:24.543 00:43:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:06:24.543 00:43:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:24.543 00:43:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:24.543 00:43:11 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.543 00:43:11 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.543 00:43:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.543 ************************************ 00:06:24.543 START TEST rpc_integrity 00:06:24.543 ************************************ 00:06:24.543 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:24.543 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:24.543 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.543 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.543 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.543 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:24.543 00:43:11 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # jq length 00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:24.803 { 00:06:24.803 "name": "Malloc0", 00:06:24.803 "aliases": [ 00:06:24.803 "fb95fad7-23f5-4c59-a7d3-cb1799afbd56" 00:06:24.803 ], 00:06:24.803 "product_name": "Malloc disk", 00:06:24.803 "block_size": 512, 00:06:24.803 "num_blocks": 16384, 00:06:24.803 "uuid": "fb95fad7-23f5-4c59-a7d3-cb1799afbd56", 00:06:24.803 "assigned_rate_limits": { 00:06:24.803 "rw_ios_per_sec": 0, 00:06:24.803 "rw_mbytes_per_sec": 0, 00:06:24.803 "r_mbytes_per_sec": 0, 00:06:24.803 "w_mbytes_per_sec": 0 00:06:24.803 }, 00:06:24.803 "claimed": false, 00:06:24.803 "zoned": false, 00:06:24.803 "supported_io_types": { 00:06:24.803 "read": true, 00:06:24.803 "write": true, 00:06:24.803 "unmap": true, 00:06:24.803 "write_zeroes": true, 00:06:24.803 "flush": true, 00:06:24.803 "reset": true, 00:06:24.803 "compare": false, 00:06:24.803 "compare_and_write": false, 00:06:24.803 "abort": true, 00:06:24.803 "nvme_admin": false, 00:06:24.803 "nvme_io": false 00:06:24.803 }, 00:06:24.803 "memory_domains": [ 00:06:24.803 { 00:06:24.803 "dma_device_id": "system", 00:06:24.803 "dma_device_type": 1 00:06:24.803 }, 00:06:24.803 { 00:06:24.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.803 "dma_device_type": 2 00:06:24.803 } 00:06:24.803 ], 00:06:24.803 "driver_specific": {} 00:06:24.803 } 00:06:24.803 ]' 00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.803 [2024-05-15 00:43:11.677621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:24.803 [2024-05-15 00:43:11.677677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.803 [2024-05-15 00:43:11.677706] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020180 00:06:24.803 [2024-05-15 00:43:11.677718] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.803 [2024-05-15 00:43:11.679446] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.803 [2024-05-15 00:43:11.679477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:24.803 Passthru0 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
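The rpc_cmd calls in rpc_integrity map directly onto the standard rpc.py client; a minimal sketch of the same create/inspect/delete sequence, assuming a running target on the default socket:
  ./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MiB of 512-byte blocks -> Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru vbdev on top
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2 while both exist
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0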
00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.803 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.803 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:24.803 { 00:06:24.803 "name": "Malloc0", 00:06:24.803 "aliases": [ 00:06:24.803 "fb95fad7-23f5-4c59-a7d3-cb1799afbd56" 00:06:24.803 ], 00:06:24.803 "product_name": "Malloc disk", 00:06:24.803 "block_size": 512, 00:06:24.803 "num_blocks": 16384, 00:06:24.803 "uuid": "fb95fad7-23f5-4c59-a7d3-cb1799afbd56", 00:06:24.803 "assigned_rate_limits": { 00:06:24.803 "rw_ios_per_sec": 0, 00:06:24.803 "rw_mbytes_per_sec": 0, 00:06:24.803 "r_mbytes_per_sec": 0, 00:06:24.803 "w_mbytes_per_sec": 0 00:06:24.803 }, 00:06:24.803 "claimed": true, 00:06:24.803 "claim_type": "exclusive_write", 00:06:24.803 "zoned": false, 00:06:24.803 "supported_io_types": { 00:06:24.803 "read": true, 00:06:24.803 "write": true, 00:06:24.803 "unmap": true, 00:06:24.803 "write_zeroes": true, 00:06:24.803 "flush": true, 00:06:24.804 "reset": true, 00:06:24.804 "compare": false, 00:06:24.804 "compare_and_write": false, 00:06:24.804 "abort": true, 00:06:24.804 "nvme_admin": false, 00:06:24.804 "nvme_io": false 00:06:24.804 }, 00:06:24.804 "memory_domains": [ 00:06:24.804 { 00:06:24.804 "dma_device_id": "system", 00:06:24.804 "dma_device_type": 1 00:06:24.804 }, 00:06:24.804 { 00:06:24.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.804 "dma_device_type": 2 00:06:24.804 } 00:06:24.804 ], 00:06:24.804 "driver_specific": {} 00:06:24.804 }, 00:06:24.804 { 00:06:24.804 "name": "Passthru0", 00:06:24.804 "aliases": [ 00:06:24.804 "c30ff7ba-7f1a-5749-88bc-a7859f2e47a2" 00:06:24.804 ], 00:06:24.804 "product_name": "passthru", 00:06:24.804 "block_size": 512, 00:06:24.804 "num_blocks": 16384, 00:06:24.804 "uuid": "c30ff7ba-7f1a-5749-88bc-a7859f2e47a2", 00:06:24.804 "assigned_rate_limits": { 00:06:24.804 "rw_ios_per_sec": 0, 00:06:24.804 "rw_mbytes_per_sec": 0, 00:06:24.804 "r_mbytes_per_sec": 0, 00:06:24.804 "w_mbytes_per_sec": 0 00:06:24.804 }, 00:06:24.804 "claimed": false, 00:06:24.804 "zoned": false, 00:06:24.804 "supported_io_types": { 00:06:24.804 "read": true, 00:06:24.804 "write": true, 00:06:24.804 "unmap": true, 00:06:24.804 "write_zeroes": true, 00:06:24.804 "flush": true, 00:06:24.804 "reset": true, 00:06:24.804 "compare": false, 00:06:24.804 "compare_and_write": false, 00:06:24.804 "abort": true, 00:06:24.804 "nvme_admin": false, 00:06:24.804 "nvme_io": false 00:06:24.804 }, 00:06:24.804 "memory_domains": [ 00:06:24.804 { 00:06:24.804 "dma_device_id": "system", 00:06:24.804 "dma_device_type": 1 00:06:24.804 }, 00:06:24.804 { 00:06:24.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.804 "dma_device_type": 2 00:06:24.804 } 00:06:24.804 ], 00:06:24.804 "driver_specific": { 00:06:24.804 "passthru": { 00:06:24.804 "name": "Passthru0", 00:06:24.804 "base_bdev_name": "Malloc0" 00:06:24.804 } 00:06:24.804 } 00:06:24.804 } 00:06:24.804 ]' 00:06:24.804 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:24.804 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:24.804 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.804 00:43:11 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.804 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.804 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.804 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:24.804 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:24.804 00:43:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:24.804 00:06:24.804 real 0m0.245s 00:06:24.804 user 0m0.137s 00:06:24.804 sys 0m0.031s 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.804 00:43:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.804 ************************************ 00:06:24.804 END TEST rpc_integrity 00:06:24.804 ************************************ 00:06:24.804 00:43:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:24.804 00:43:11 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.804 00:43:11 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.804 00:43:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.070 ************************************ 00:06:25.070 START TEST rpc_plugins 00:06:25.070 ************************************ 00:06:25.070 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:25.070 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:25.070 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.070 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.070 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.070 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:25.070 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:25.070 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.070 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.070 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.071 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:25.071 { 00:06:25.071 "name": "Malloc1", 00:06:25.071 "aliases": [ 00:06:25.071 "05a54998-9004-4038-82cb-fad1f38e830d" 00:06:25.071 ], 00:06:25.071 "product_name": "Malloc disk", 00:06:25.071 "block_size": 4096, 00:06:25.071 "num_blocks": 256, 00:06:25.071 "uuid": "05a54998-9004-4038-82cb-fad1f38e830d", 00:06:25.071 "assigned_rate_limits": { 00:06:25.071 "rw_ios_per_sec": 0, 00:06:25.071 "rw_mbytes_per_sec": 0, 00:06:25.071 "r_mbytes_per_sec": 0, 00:06:25.071 "w_mbytes_per_sec": 0 00:06:25.071 }, 00:06:25.071 "claimed": false, 00:06:25.071 "zoned": false, 00:06:25.071 "supported_io_types": { 00:06:25.071 "read": true, 00:06:25.071 "write": true, 
00:06:25.071 "unmap": true, 00:06:25.071 "write_zeroes": true, 00:06:25.071 "flush": true, 00:06:25.071 "reset": true, 00:06:25.071 "compare": false, 00:06:25.071 "compare_and_write": false, 00:06:25.071 "abort": true, 00:06:25.071 "nvme_admin": false, 00:06:25.071 "nvme_io": false 00:06:25.071 }, 00:06:25.071 "memory_domains": [ 00:06:25.071 { 00:06:25.071 "dma_device_id": "system", 00:06:25.071 "dma_device_type": 1 00:06:25.071 }, 00:06:25.071 { 00:06:25.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.071 "dma_device_type": 2 00:06:25.071 } 00:06:25.071 ], 00:06:25.071 "driver_specific": {} 00:06:25.071 } 00:06:25.071 ]' 00:06:25.071 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:25.071 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:25.071 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:25.071 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.071 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.071 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.071 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:25.071 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.071 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.071 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.071 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:25.071 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:25.071 00:43:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:25.071 00:06:25.071 real 0m0.115s 00:06:25.071 user 0m0.066s 00:06:25.071 sys 0m0.015s 00:06:25.071 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.071 00:43:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.071 ************************************ 00:06:25.071 END TEST rpc_plugins 00:06:25.071 ************************************ 00:06:25.071 00:43:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:25.071 00:43:12 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.071 00:43:12 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.071 00:43:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.071 ************************************ 00:06:25.071 START TEST rpc_trace_cmd_test 00:06:25.071 ************************************ 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:25.071 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3275572", 00:06:25.071 "tpoint_group_mask": "0x8", 00:06:25.071 "iscsi_conn": { 00:06:25.071 "mask": "0x2", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "scsi": { 00:06:25.071 "mask": "0x4", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "bdev": 
{ 00:06:25.071 "mask": "0x8", 00:06:25.071 "tpoint_mask": "0xffffffffffffffff" 00:06:25.071 }, 00:06:25.071 "nvmf_rdma": { 00:06:25.071 "mask": "0x10", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "nvmf_tcp": { 00:06:25.071 "mask": "0x20", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "ftl": { 00:06:25.071 "mask": "0x40", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "blobfs": { 00:06:25.071 "mask": "0x80", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "dsa": { 00:06:25.071 "mask": "0x200", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "thread": { 00:06:25.071 "mask": "0x400", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "nvme_pcie": { 00:06:25.071 "mask": "0x800", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "iaa": { 00:06:25.071 "mask": "0x1000", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "nvme_tcp": { 00:06:25.071 "mask": "0x2000", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "bdev_nvme": { 00:06:25.071 "mask": "0x4000", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 }, 00:06:25.071 "sock": { 00:06:25.071 "mask": "0x8000", 00:06:25.071 "tpoint_mask": "0x0" 00:06:25.071 } 00:06:25.071 }' 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:25.071 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:25.335 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:25.335 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:25.335 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:25.335 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:25.335 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:25.335 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:25.335 00:43:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:25.335 00:06:25.335 real 0m0.182s 00:06:25.335 user 0m0.151s 00:06:25.335 sys 0m0.023s 00:06:25.335 00:43:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.335 00:43:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.335 ************************************ 00:06:25.335 END TEST rpc_trace_cmd_test 00:06:25.335 ************************************ 00:06:25.335 00:43:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:25.335 00:43:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:25.335 00:43:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:25.335 00:43:12 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.335 00:43:12 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.335 00:43:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.335 ************************************ 00:06:25.335 START TEST rpc_daemon_integrity 00:06:25.335 ************************************ 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.335 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:25.335 { 00:06:25.335 "name": "Malloc2", 00:06:25.335 "aliases": [ 00:06:25.335 "810db34a-3248-4809-80e8-e3e4bb9b6a88" 00:06:25.335 ], 00:06:25.335 "product_name": "Malloc disk", 00:06:25.335 "block_size": 512, 00:06:25.335 "num_blocks": 16384, 00:06:25.335 "uuid": "810db34a-3248-4809-80e8-e3e4bb9b6a88", 00:06:25.336 "assigned_rate_limits": { 00:06:25.336 "rw_ios_per_sec": 0, 00:06:25.336 "rw_mbytes_per_sec": 0, 00:06:25.336 "r_mbytes_per_sec": 0, 00:06:25.336 "w_mbytes_per_sec": 0 00:06:25.336 }, 00:06:25.336 "claimed": false, 00:06:25.336 "zoned": false, 00:06:25.336 "supported_io_types": { 00:06:25.336 "read": true, 00:06:25.336 "write": true, 00:06:25.336 "unmap": true, 00:06:25.336 "write_zeroes": true, 00:06:25.336 "flush": true, 00:06:25.336 "reset": true, 00:06:25.336 "compare": false, 00:06:25.336 "compare_and_write": false, 00:06:25.336 "abort": true, 00:06:25.336 "nvme_admin": false, 00:06:25.336 "nvme_io": false 00:06:25.336 }, 00:06:25.336 "memory_domains": [ 00:06:25.336 { 00:06:25.336 "dma_device_id": "system", 00:06:25.336 "dma_device_type": 1 00:06:25.336 }, 00:06:25.336 { 00:06:25.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.336 "dma_device_type": 2 00:06:25.336 } 00:06:25.336 ], 00:06:25.336 "driver_specific": {} 00:06:25.336 } 00:06:25.336 ]' 00:06:25.336 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:25.336 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:25.336 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:25.336 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.336 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.597 [2024-05-15 00:43:12.396470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:25.597 [2024-05-15 00:43:12.396513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:25.597 [2024-05-15 00:43:12.396539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021380 00:06:25.597 [2024-05-15 00:43:12.396548] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:25.597 
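The vbdev_passthru notices around this point show Passthru0 claiming its base bdev, which is why the next bdev_get_bdevs reports Malloc2 with "claimed": true and an exclusive_write claim. One way to check just that, assuming bdev_get_bdevs' -b name filter:
  ./scripts/rpc.py bdev_get_bdevs -b Malloc2 | jq '.[0].claimed, .[0].claim_type'
  # expected while Passthru0 exists: true and "exclusive_write"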
[2024-05-15 00:43:12.398262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:25.597 [2024-05-15 00:43:12.398289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:25.597 Passthru0 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:25.597 { 00:06:25.597 "name": "Malloc2", 00:06:25.597 "aliases": [ 00:06:25.597 "810db34a-3248-4809-80e8-e3e4bb9b6a88" 00:06:25.597 ], 00:06:25.597 "product_name": "Malloc disk", 00:06:25.597 "block_size": 512, 00:06:25.597 "num_blocks": 16384, 00:06:25.597 "uuid": "810db34a-3248-4809-80e8-e3e4bb9b6a88", 00:06:25.597 "assigned_rate_limits": { 00:06:25.597 "rw_ios_per_sec": 0, 00:06:25.597 "rw_mbytes_per_sec": 0, 00:06:25.597 "r_mbytes_per_sec": 0, 00:06:25.597 "w_mbytes_per_sec": 0 00:06:25.597 }, 00:06:25.597 "claimed": true, 00:06:25.597 "claim_type": "exclusive_write", 00:06:25.597 "zoned": false, 00:06:25.597 "supported_io_types": { 00:06:25.597 "read": true, 00:06:25.597 "write": true, 00:06:25.597 "unmap": true, 00:06:25.597 "write_zeroes": true, 00:06:25.597 "flush": true, 00:06:25.597 "reset": true, 00:06:25.597 "compare": false, 00:06:25.597 "compare_and_write": false, 00:06:25.597 "abort": true, 00:06:25.597 "nvme_admin": false, 00:06:25.597 "nvme_io": false 00:06:25.597 }, 00:06:25.597 "memory_domains": [ 00:06:25.597 { 00:06:25.597 "dma_device_id": "system", 00:06:25.597 "dma_device_type": 1 00:06:25.597 }, 00:06:25.597 { 00:06:25.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.597 "dma_device_type": 2 00:06:25.597 } 00:06:25.597 ], 00:06:25.597 "driver_specific": {} 00:06:25.597 }, 00:06:25.597 { 00:06:25.597 "name": "Passthru0", 00:06:25.597 "aliases": [ 00:06:25.597 "256496f1-d7b3-5c26-a40d-b7ba248b963a" 00:06:25.597 ], 00:06:25.597 "product_name": "passthru", 00:06:25.597 "block_size": 512, 00:06:25.597 "num_blocks": 16384, 00:06:25.597 "uuid": "256496f1-d7b3-5c26-a40d-b7ba248b963a", 00:06:25.597 "assigned_rate_limits": { 00:06:25.597 "rw_ios_per_sec": 0, 00:06:25.597 "rw_mbytes_per_sec": 0, 00:06:25.597 "r_mbytes_per_sec": 0, 00:06:25.597 "w_mbytes_per_sec": 0 00:06:25.597 }, 00:06:25.597 "claimed": false, 00:06:25.597 "zoned": false, 00:06:25.597 "supported_io_types": { 00:06:25.597 "read": true, 00:06:25.597 "write": true, 00:06:25.597 "unmap": true, 00:06:25.597 "write_zeroes": true, 00:06:25.597 "flush": true, 00:06:25.597 "reset": true, 00:06:25.597 "compare": false, 00:06:25.597 "compare_and_write": false, 00:06:25.597 "abort": true, 00:06:25.597 "nvme_admin": false, 00:06:25.597 "nvme_io": false 00:06:25.597 }, 00:06:25.597 "memory_domains": [ 00:06:25.597 { 00:06:25.597 "dma_device_id": "system", 00:06:25.597 "dma_device_type": 1 00:06:25.597 }, 00:06:25.597 { 00:06:25.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.597 "dma_device_type": 2 00:06:25.597 } 00:06:25.597 ], 00:06:25.597 "driver_specific": { 00:06:25.597 "passthru": { 00:06:25.597 "name": "Passthru0", 00:06:25.597 "base_bdev_name": "Malloc2" 00:06:25.597 } 00:06:25.597 } 00:06:25.597 
} 00:06:25.597 ]' 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:25.597 00:06:25.597 real 0m0.221s 00:06:25.597 user 0m0.129s 00:06:25.597 sys 0m0.034s 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.597 00:43:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.597 ************************************ 00:06:25.597 END TEST rpc_daemon_integrity 00:06:25.597 ************************************ 00:06:25.597 00:43:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:25.597 00:43:12 rpc -- rpc/rpc.sh@84 -- # killprocess 3275572 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@946 -- # '[' -z 3275572 ']' 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@950 -- # kill -0 3275572 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@951 -- # uname 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3275572 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3275572' 00:06:25.597 killing process with pid 3275572 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@965 -- # kill 3275572 00:06:25.597 00:43:12 rpc -- common/autotest_common.sh@970 -- # wait 3275572 00:06:26.538 00:06:26.538 real 0m2.850s 00:06:26.538 user 0m3.332s 00:06:26.538 sys 0m0.758s 00:06:26.538 00:43:13 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.538 00:43:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.538 ************************************ 00:06:26.538 END TEST rpc 00:06:26.538 ************************************ 00:06:26.538 00:43:13 -- spdk/autotest.sh@166 -- # run_test skip_rpc 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:26.538 00:43:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.538 00:43:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.538 00:43:13 -- common/autotest_common.sh@10 -- # set +x 00:06:26.538 ************************************ 00:06:26.538 START TEST skip_rpc 00:06:26.538 ************************************ 00:06:26.538 00:43:13 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:26.538 * Looking for test storage... 00:06:26.538 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:06:26.538 00:43:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:06:26.538 00:43:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:06:26.538 00:43:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:26.538 00:43:13 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.538 00:43:13 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.538 00:43:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.538 ************************************ 00:06:26.538 START TEST skip_rpc 00:06:26.538 ************************************ 00:06:26.538 00:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:26.538 00:43:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3276342 00:06:26.538 00:43:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.538 00:43:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:26.538 00:43:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:26.804 [2024-05-15 00:43:13.695975] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
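The inner skip_rpc case started here is purely negative: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so the spdk_get_version call attempted below has to fail. A minimal sketch of that check, using the same flags as this run and assuming the in-tree rpc.py client:
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 & pid=$!
  sleep 5
  ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1 \
      && echo "unexpected: an RPC server answered" \
      || echo "expected: the RPC fails because no server was started"
  kill $pid; wait $pid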
00:06:26.804 [2024-05-15 00:43:13.696110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276342 ] 00:06:26.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.804 [2024-05-15 00:43:13.828909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.064 [2024-05-15 00:43:13.925776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3276342 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3276342 ']' 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3276342 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3276342 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3276342' 00:06:32.343 killing process with pid 3276342 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3276342 00:06:32.343 00:43:18 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3276342 00:06:32.603 00:06:32.603 real 0m5.919s 00:06:32.603 user 0m5.589s 00:06:32.603 sys 0m0.341s 00:06:32.603 00:43:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.603 00:43:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.603 ************************************ 00:06:32.603 END TEST skip_rpc 
00:06:32.603 ************************************ 00:06:32.603 00:43:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:32.603 00:43:19 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.603 00:43:19 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.603 00:43:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.603 ************************************ 00:06:32.603 START TEST skip_rpc_with_json 00:06:32.603 ************************************ 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3277549 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3277549 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3277549 ']' 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.603 00:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:32.603 [2024-05-15 00:43:19.639220] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
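skip_rpc_with_json (gen_json_config) first puts some non-default state into the freshly started target before saving it: nvmf_get_transports shows no TCP transport yet, one is created, and the whole configuration is dumped with save_config. The same steps by hand, assuming the in-tree rpc.py client and with paths shortened:
  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails: transport 'tcp' does not exist
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > config.json          # the JSON dump reproduced below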
00:06:32.603 [2024-05-15 00:43:19.639303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277549 ] 00:06:32.863 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.863 [2024-05-15 00:43:19.730793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.863 [2024-05-15 00:43:19.822770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.434 [2024-05-15 00:43:20.353876] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:33.434 request: 00:06:33.434 { 00:06:33.434 "trtype": "tcp", 00:06:33.434 "method": "nvmf_get_transports", 00:06:33.434 "req_id": 1 00:06:33.434 } 00:06:33.434 Got JSON-RPC error response 00:06:33.434 response: 00:06:33.434 { 00:06:33.434 "code": -19, 00:06:33.434 "message": "No such device" 00:06:33.434 } 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.434 [2024-05-15 00:43:20.361971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.434 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.693 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.693 00:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:06:33.693 { 00:06:33.693 "subsystems": [ 00:06:33.693 { 00:06:33.693 "subsystem": "keyring", 00:06:33.693 "config": [] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "iobuf", 00:06:33.693 "config": [ 00:06:33.693 { 00:06:33.693 "method": "iobuf_set_options", 00:06:33.693 "params": { 00:06:33.693 "small_pool_count": 8192, 00:06:33.693 "large_pool_count": 1024, 00:06:33.693 "small_bufsize": 8192, 00:06:33.693 "large_bufsize": 135168 00:06:33.693 } 00:06:33.693 } 00:06:33.693 ] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "sock", 00:06:33.693 "config": [ 00:06:33.693 { 00:06:33.693 "method": "sock_impl_set_options", 00:06:33.693 "params": { 00:06:33.693 "impl_name": "posix", 00:06:33.693 "recv_buf_size": 2097152, 00:06:33.693 "send_buf_size": 2097152, 00:06:33.693 "enable_recv_pipe": true, 00:06:33.693 "enable_quickack": false, 00:06:33.693 
"enable_placement_id": 0, 00:06:33.693 "enable_zerocopy_send_server": true, 00:06:33.693 "enable_zerocopy_send_client": false, 00:06:33.693 "zerocopy_threshold": 0, 00:06:33.693 "tls_version": 0, 00:06:33.693 "enable_ktls": false 00:06:33.693 } 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "method": "sock_impl_set_options", 00:06:33.693 "params": { 00:06:33.693 "impl_name": "ssl", 00:06:33.693 "recv_buf_size": 4096, 00:06:33.693 "send_buf_size": 4096, 00:06:33.693 "enable_recv_pipe": true, 00:06:33.693 "enable_quickack": false, 00:06:33.693 "enable_placement_id": 0, 00:06:33.693 "enable_zerocopy_send_server": true, 00:06:33.693 "enable_zerocopy_send_client": false, 00:06:33.693 "zerocopy_threshold": 0, 00:06:33.693 "tls_version": 0, 00:06:33.693 "enable_ktls": false 00:06:33.693 } 00:06:33.693 } 00:06:33.693 ] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "vmd", 00:06:33.693 "config": [] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "accel", 00:06:33.693 "config": [ 00:06:33.693 { 00:06:33.693 "method": "accel_set_options", 00:06:33.693 "params": { 00:06:33.693 "small_cache_size": 128, 00:06:33.693 "large_cache_size": 16, 00:06:33.693 "task_count": 2048, 00:06:33.693 "sequence_count": 2048, 00:06:33.693 "buf_count": 2048 00:06:33.693 } 00:06:33.693 } 00:06:33.693 ] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "bdev", 00:06:33.693 "config": [ 00:06:33.693 { 00:06:33.693 "method": "bdev_set_options", 00:06:33.693 "params": { 00:06:33.693 "bdev_io_pool_size": 65535, 00:06:33.693 "bdev_io_cache_size": 256, 00:06:33.693 "bdev_auto_examine": true, 00:06:33.693 "iobuf_small_cache_size": 128, 00:06:33.693 "iobuf_large_cache_size": 16 00:06:33.693 } 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "method": "bdev_raid_set_options", 00:06:33.693 "params": { 00:06:33.693 "process_window_size_kb": 1024 00:06:33.693 } 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "method": "bdev_iscsi_set_options", 00:06:33.693 "params": { 00:06:33.693 "timeout_sec": 30 00:06:33.693 } 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "method": "bdev_nvme_set_options", 00:06:33.693 "params": { 00:06:33.693 "action_on_timeout": "none", 00:06:33.693 "timeout_us": 0, 00:06:33.693 "timeout_admin_us": 0, 00:06:33.693 "keep_alive_timeout_ms": 10000, 00:06:33.693 "arbitration_burst": 0, 00:06:33.693 "low_priority_weight": 0, 00:06:33.693 "medium_priority_weight": 0, 00:06:33.693 "high_priority_weight": 0, 00:06:33.693 "nvme_adminq_poll_period_us": 10000, 00:06:33.693 "nvme_ioq_poll_period_us": 0, 00:06:33.693 "io_queue_requests": 0, 00:06:33.693 "delay_cmd_submit": true, 00:06:33.693 "transport_retry_count": 4, 00:06:33.693 "bdev_retry_count": 3, 00:06:33.693 "transport_ack_timeout": 0, 00:06:33.693 "ctrlr_loss_timeout_sec": 0, 00:06:33.693 "reconnect_delay_sec": 0, 00:06:33.693 "fast_io_fail_timeout_sec": 0, 00:06:33.693 "disable_auto_failback": false, 00:06:33.693 "generate_uuids": false, 00:06:33.693 "transport_tos": 0, 00:06:33.693 "nvme_error_stat": false, 00:06:33.693 "rdma_srq_size": 0, 00:06:33.693 "io_path_stat": false, 00:06:33.693 "allow_accel_sequence": false, 00:06:33.693 "rdma_max_cq_size": 0, 00:06:33.693 "rdma_cm_event_timeout_ms": 0, 00:06:33.693 "dhchap_digests": [ 00:06:33.693 "sha256", 00:06:33.693 "sha384", 00:06:33.693 "sha512" 00:06:33.693 ], 00:06:33.693 "dhchap_dhgroups": [ 00:06:33.693 "null", 00:06:33.693 "ffdhe2048", 00:06:33.693 "ffdhe3072", 00:06:33.693 "ffdhe4096", 00:06:33.693 "ffdhe6144", 00:06:33.693 "ffdhe8192" 00:06:33.693 ] 00:06:33.693 } 00:06:33.693 }, 00:06:33.693 { 
00:06:33.693 "method": "bdev_nvme_set_hotplug", 00:06:33.693 "params": { 00:06:33.693 "period_us": 100000, 00:06:33.693 "enable": false 00:06:33.693 } 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "method": "bdev_wait_for_examine" 00:06:33.693 } 00:06:33.693 ] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "scsi", 00:06:33.693 "config": null 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "scheduler", 00:06:33.693 "config": [ 00:06:33.693 { 00:06:33.693 "method": "framework_set_scheduler", 00:06:33.693 "params": { 00:06:33.693 "name": "static" 00:06:33.693 } 00:06:33.693 } 00:06:33.693 ] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "vhost_scsi", 00:06:33.693 "config": [] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "vhost_blk", 00:06:33.693 "config": [] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "ublk", 00:06:33.693 "config": [] 00:06:33.693 }, 00:06:33.693 { 00:06:33.693 "subsystem": "nbd", 00:06:33.693 "config": [] 00:06:33.694 }, 00:06:33.694 { 00:06:33.694 "subsystem": "nvmf", 00:06:33.694 "config": [ 00:06:33.694 { 00:06:33.694 "method": "nvmf_set_config", 00:06:33.694 "params": { 00:06:33.694 "discovery_filter": "match_any", 00:06:33.694 "admin_cmd_passthru": { 00:06:33.694 "identify_ctrlr": false 00:06:33.694 } 00:06:33.694 } 00:06:33.694 }, 00:06:33.694 { 00:06:33.694 "method": "nvmf_set_max_subsystems", 00:06:33.694 "params": { 00:06:33.694 "max_subsystems": 1024 00:06:33.694 } 00:06:33.694 }, 00:06:33.694 { 00:06:33.694 "method": "nvmf_set_crdt", 00:06:33.694 "params": { 00:06:33.694 "crdt1": 0, 00:06:33.694 "crdt2": 0, 00:06:33.694 "crdt3": 0 00:06:33.694 } 00:06:33.694 }, 00:06:33.694 { 00:06:33.694 "method": "nvmf_create_transport", 00:06:33.694 "params": { 00:06:33.694 "trtype": "TCP", 00:06:33.694 "max_queue_depth": 128, 00:06:33.694 "max_io_qpairs_per_ctrlr": 127, 00:06:33.694 "in_capsule_data_size": 4096, 00:06:33.694 "max_io_size": 131072, 00:06:33.694 "io_unit_size": 131072, 00:06:33.694 "max_aq_depth": 128, 00:06:33.694 "num_shared_buffers": 511, 00:06:33.694 "buf_cache_size": 4294967295, 00:06:33.694 "dif_insert_or_strip": false, 00:06:33.694 "zcopy": false, 00:06:33.694 "c2h_success": true, 00:06:33.694 "sock_priority": 0, 00:06:33.694 "abort_timeout_sec": 1, 00:06:33.694 "ack_timeout": 0, 00:06:33.694 "data_wr_pool_size": 0 00:06:33.694 } 00:06:33.694 } 00:06:33.694 ] 00:06:33.694 }, 00:06:33.694 { 00:06:33.694 "subsystem": "iscsi", 00:06:33.694 "config": [ 00:06:33.694 { 00:06:33.694 "method": "iscsi_set_options", 00:06:33.694 "params": { 00:06:33.694 "node_base": "iqn.2016-06.io.spdk", 00:06:33.694 "max_sessions": 128, 00:06:33.694 "max_connections_per_session": 2, 00:06:33.694 "max_queue_depth": 64, 00:06:33.694 "default_time2wait": 2, 00:06:33.694 "default_time2retain": 20, 00:06:33.694 "first_burst_length": 8192, 00:06:33.694 "immediate_data": true, 00:06:33.694 "allow_duplicated_isid": false, 00:06:33.694 "error_recovery_level": 0, 00:06:33.694 "nop_timeout": 60, 00:06:33.694 "nop_in_interval": 30, 00:06:33.694 "disable_chap": false, 00:06:33.694 "require_chap": false, 00:06:33.694 "mutual_chap": false, 00:06:33.694 "chap_group": 0, 00:06:33.694 "max_large_datain_per_connection": 64, 00:06:33.694 "max_r2t_per_connection": 4, 00:06:33.694 "pdu_pool_size": 36864, 00:06:33.694 "immediate_data_pool_size": 16384, 00:06:33.694 "data_out_pool_size": 2048 00:06:33.694 } 00:06:33.694 } 00:06:33.694 ] 00:06:33.694 } 00:06:33.694 ] 00:06:33.694 } 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 
-- # trap - SIGINT SIGTERM EXIT 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3277549 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3277549 ']' 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3277549 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3277549 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3277549' 00:06:33.694 killing process with pid 3277549 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3277549 00:06:33.694 00:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3277549 00:06:34.633 00:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3277859 00:06:34.633 00:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:34.633 00:43:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3277859 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3277859 ']' 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3277859 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3277859 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3277859' 00:06:39.931 killing process with pid 3277859 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3277859 00:06:39.931 00:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3277859 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:06:40.502 00:06:40.502 real 0m7.715s 00:06:40.502 user 0m7.325s 00:06:40.502 sys 0m0.709s 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.502 
************************************ 00:06:40.502 END TEST skip_rpc_with_json 00:06:40.502 ************************************ 00:06:40.502 00:43:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:40.502 00:43:27 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:40.502 00:43:27 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.502 00:43:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.502 ************************************ 00:06:40.502 START TEST skip_rpc_with_delay 00:06:40.502 ************************************ 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:40.502 [2024-05-15 00:43:27.448007] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:40.502 [2024-05-15 00:43:27.448155] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.502 00:06:40.502 real 0m0.130s 00:06:40.502 user 0m0.060s 00:06:40.502 sys 0m0.069s 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.502 00:43:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:40.502 ************************************ 00:06:40.502 END TEST skip_rpc_with_delay 00:06:40.502 ************************************ 00:06:40.502 00:43:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:40.502 00:43:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:40.502 00:43:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:40.502 00:43:27 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:40.502 00:43:27 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.502 00:43:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.502 ************************************ 00:06:40.502 START TEST exit_on_failed_rpc_init 00:06:40.502 ************************************ 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3279097 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3279097 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3279097 ']' 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.502 00:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:40.763 [2024-05-15 00:43:27.640105] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:06:40.763 [2024-05-15 00:43:27.640223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279097 ] 00:06:40.763 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.763 [2024-05-15 00:43:27.758714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.022 [2024-05-15 00:43:27.849902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:41.283 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:41.542 [2024-05-15 00:43:28.416006] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:06:41.542 [2024-05-15 00:43:28.416127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279164 ] 00:06:41.542 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.542 [2024-05-15 00:43:28.557018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.802 [2024-05-15 00:43:28.711078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.802 [2024-05-15 00:43:28.711188] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:41.802 [2024-05-15 00:43:28.711211] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:41.802 [2024-05-15 00:43:28.711231] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3279097 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3279097 ']' 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3279097 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.063 00:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3279097 00:06:42.063 00:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.063 00:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.063 00:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3279097' 00:06:42.063 killing process with pid 3279097 00:06:42.063 00:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3279097 00:06:42.063 00:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3279097 00:06:43.004 00:06:43.004 real 0m2.314s 00:06:43.004 user 0m2.693s 00:06:43.004 sys 0m0.563s 00:06:43.004 00:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.004 00:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:43.004 ************************************ 00:06:43.004 END TEST exit_on_failed_rpc_init 00:06:43.004 ************************************ 00:06:43.004 00:43:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:06:43.004 00:06:43.004 real 0m16.412s 00:06:43.004 user 0m15.786s 00:06:43.004 sys 0m1.908s 00:06:43.004 00:43:29 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.004 00:43:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.004 ************************************ 00:06:43.004 END TEST skip_rpc 00:06:43.004 ************************************ 00:06:43.004 00:43:29 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:43.004 00:43:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.004 00:43:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.004 00:43:29 -- 
common/autotest_common.sh@10 -- # set +x 00:06:43.004 ************************************ 00:06:43.004 START TEST rpc_client 00:06:43.004 ************************************ 00:06:43.004 00:43:29 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:43.004 * Looking for test storage... 00:06:43.004 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:06:43.004 00:43:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:43.265 OK 00:06:43.265 00:43:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:43.265 00:06:43.265 real 0m0.115s 00:06:43.265 user 0m0.040s 00:06:43.265 sys 0m0.080s 00:06:43.265 00:43:30 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.265 00:43:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:43.265 ************************************ 00:06:43.265 END TEST rpc_client 00:06:43.265 ************************************ 00:06:43.265 00:43:30 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:06:43.265 00:43:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.265 00:43:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.265 00:43:30 -- common/autotest_common.sh@10 -- # set +x 00:06:43.265 ************************************ 00:06:43.265 START TEST json_config 00:06:43.265 ************************************ 00:06:43.265 00:43:30 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:06:43.265 00:43:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.265 00:43:30 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:06:43.265 00:43:30 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.265 00:43:30 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.265 00:43:30 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.265 00:43:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.265 00:43:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.266 00:43:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.266 00:43:30 json_config -- paths/export.sh@5 -- # export PATH 00:06:43.266 00:43:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.266 00:43:30 json_config -- nvmf/common.sh@47 -- # : 0 00:06:43.266 00:43:30 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.266 00:43:30 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.266 00:43:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.266 00:43:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.266 00:43:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.266 00:43:30 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.266 00:43:30 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.266 00:43:30 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:43.266 INFO: JSON configuration test init 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.266 00:43:30 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:43.266 00:43:30 json_config -- json_config/common.sh@9 -- # local app=target 00:06:43.266 00:43:30 json_config -- json_config/common.sh@10 -- # shift 00:06:43.266 00:43:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:43.266 00:43:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:43.266 00:43:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:43.266 00:43:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.266 00:43:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.266 00:43:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3279803 00:06:43.266 00:43:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:43.266 Waiting for target to run... 
00:06:43.266 00:43:30 json_config -- json_config/common.sh@25 -- # waitforlisten 3279803 /var/tmp/spdk_tgt.sock 00:06:43.266 00:43:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@827 -- # '[' -z 3279803 ']' 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:43.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.266 00:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.266 [2024-05-15 00:43:30.293473] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:06:43.266 [2024-05-15 00:43:30.293587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279803 ] 00:06:43.533 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.800 [2024-05-15 00:43:30.593551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.800 [2024-05-15 00:43:30.673629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.060 00:43:31 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.060 00:43:31 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:44.060 00:43:31 json_config -- json_config/common.sh@26 -- # echo '' 00:06:44.060 00:06:44.060 00:43:31 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:44.060 00:43:31 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:44.060 00:43:31 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:44.060 00:43:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.060 00:43:31 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:44.060 00:43:31 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:44.060 00:43:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.060 00:43:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.060 00:43:31 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:44.060 00:43:31 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:44.060 00:43:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:45.475 00:43:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:45.475 00:43:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@45 -- # 
local ret=0 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:45.475 00:43:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:45.475 00:43:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.475 00:43:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:45.475 00:43:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:45.475 00:43:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:45.475 00:43:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:45.475 MallocForNvmf0 00:06:45.475 00:43:32 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:45.475 00:43:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:45.735 MallocForNvmf1 00:06:45.735 00:43:32 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:45.736 00:43:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:45.736 [2024-05-15 00:43:32.783827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.996 00:43:32 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.996 00:43:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.996 00:43:32 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:45.996 00:43:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:46.256 00:43:33 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:46.256 00:43:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:46.256 00:43:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:46.256 00:43:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:46.516 [2024-05-15 00:43:33.319928] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:46.516 [2024-05-15 00:43:33.320353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:46.516 00:43:33 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:46.516 00:43:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.516 00:43:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.516 00:43:33 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:46.516 00:43:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.516 00:43:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.516 00:43:33 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:46.516 00:43:33 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:46.516 00:43:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:46.516 MallocBdevForConfigChangeCheck 00:06:46.516 00:43:33 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:46.516 00:43:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.516 00:43:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.776 00:43:33 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:46.776 00:43:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:47.036 00:43:33 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:47.036 INFO: shutting down applications... 
00:06:47.036 00:43:33 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:47.036 00:43:33 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:47.036 00:43:33 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:47.036 00:43:33 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:48.946 Calling clear_iscsi_subsystem 00:06:48.946 Calling clear_nvmf_subsystem 00:06:48.946 Calling clear_nbd_subsystem 00:06:48.946 Calling clear_ublk_subsystem 00:06:48.946 Calling clear_vhost_blk_subsystem 00:06:48.946 Calling clear_vhost_scsi_subsystem 00:06:48.946 Calling clear_bdev_subsystem 00:06:48.946 00:43:35 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:06:48.946 00:43:35 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:48.946 00:43:35 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:48.946 00:43:35 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:48.946 00:43:35 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:48.946 00:43:35 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:49.205 00:43:36 json_config -- json_config/json_config.sh@345 -- # break 00:06:49.205 00:43:36 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:49.205 00:43:36 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:49.205 00:43:36 json_config -- json_config/common.sh@31 -- # local app=target 00:06:49.205 00:43:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:49.205 00:43:36 json_config -- json_config/common.sh@35 -- # [[ -n 3279803 ]] 00:06:49.205 00:43:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3279803 00:06:49.205 [2024-05-15 00:43:36.088054] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:49.205 00:43:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:49.205 00:43:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.205 00:43:36 json_config -- json_config/common.sh@41 -- # kill -0 3279803 00:06:49.205 00:43:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:49.775 00:43:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:49.775 00:43:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.775 00:43:36 json_config -- json_config/common.sh@41 -- # kill -0 3279803 00:06:49.775 00:43:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:49.776 00:43:36 json_config -- json_config/common.sh@43 -- # break 00:06:49.776 00:43:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:49.776 00:43:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:49.776 SPDK target shutdown done 00:06:49.776 00:43:36 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:06:49.776 INFO: relaunching applications... 00:06:49.776 00:43:36 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:49.776 00:43:36 json_config -- json_config/common.sh@9 -- # local app=target 00:06:49.776 00:43:36 json_config -- json_config/common.sh@10 -- # shift 00:06:49.776 00:43:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:49.776 00:43:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:49.776 00:43:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:49.776 00:43:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:49.776 00:43:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:49.776 00:43:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3281119 00:06:49.776 00:43:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:49.776 Waiting for target to run... 00:06:49.776 00:43:36 json_config -- json_config/common.sh@25 -- # waitforlisten 3281119 /var/tmp/spdk_tgt.sock 00:06:49.776 00:43:36 json_config -- common/autotest_common.sh@827 -- # '[' -z 3281119 ']' 00:06:49.776 00:43:36 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:49.776 00:43:36 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.776 00:43:36 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:49.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:49.776 00:43:36 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.776 00:43:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:49.776 00:43:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.776 [2024-05-15 00:43:36.685610] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:06:49.776 [2024-05-15 00:43:36.685752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281119 ] 00:06:49.776 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.035 [2024-05-15 00:43:37.067460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.306 [2024-05-15 00:43:37.156335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.243 [2024-05-15 00:43:38.260872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.243 [2024-05-15 00:43:38.292780] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:51.243 [2024-05-15 00:43:38.293172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:51.503 00:43:38 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.503 00:43:38 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:51.503 00:43:38 json_config -- json_config/common.sh@26 -- # echo '' 00:06:51.503 00:06:51.503 00:43:38 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:51.503 00:43:38 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:51.503 INFO: Checking if target configuration is the same... 00:06:51.503 00:43:38 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.503 00:43:38 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:51.503 00:43:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:51.503 + '[' 2 -ne 2 ']' 00:06:51.503 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:51.503 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:06:51.503 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:06:51.503 +++ basename /dev/fd/62 00:06:51.503 ++ mktemp /tmp/62.XXX 00:06:51.503 + tmp_file_1=/tmp/62.7gh 00:06:51.503 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.503 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:51.503 + tmp_file_2=/tmp/spdk_tgt_config.json.hVu 00:06:51.503 + ret=0 00:06:51.503 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:51.762 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:51.762 + diff -u /tmp/62.7gh /tmp/spdk_tgt_config.json.hVu 00:06:51.762 + echo 'INFO: JSON config files are the same' 00:06:51.762 INFO: JSON config files are the same 00:06:51.762 + rm /tmp/62.7gh /tmp/spdk_tgt_config.json.hVu 00:06:51.762 + exit 0 00:06:51.762 00:43:38 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:51.762 00:43:38 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:51.762 INFO: changing configuration and checking if this can be detected... 
00:06:51.762 00:43:38 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:51.762 00:43:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:51.762 00:43:38 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.762 00:43:38 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:51.762 00:43:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:51.762 + '[' 2 -ne 2 ']' 00:06:51.762 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:51.762 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:06:51.763 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:06:51.763 +++ basename /dev/fd/62 00:06:51.763 ++ mktemp /tmp/62.XXX 00:06:51.763 + tmp_file_1=/tmp/62.fZE 00:06:51.763 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.763 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:51.763 + tmp_file_2=/tmp/spdk_tgt_config.json.xne 00:06:51.763 + ret=0 00:06:51.763 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:52.022 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:52.022 + diff -u /tmp/62.fZE /tmp/spdk_tgt_config.json.xne 00:06:52.022 + ret=1 00:06:52.022 + echo '=== Start of file: /tmp/62.fZE ===' 00:06:52.022 + cat /tmp/62.fZE 00:06:52.281 + echo '=== End of file: /tmp/62.fZE ===' 00:06:52.281 + echo '' 00:06:52.281 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xne ===' 00:06:52.281 + cat /tmp/spdk_tgt_config.json.xne 00:06:52.281 + echo '=== End of file: /tmp/spdk_tgt_config.json.xne ===' 00:06:52.281 + echo '' 00:06:52.281 + rm /tmp/62.fZE /tmp/spdk_tgt_config.json.xne 00:06:52.281 + exit 1 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:52.281 INFO: configuration change detected. 
00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@317 -- # [[ -n 3281119 ]] 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.281 00:43:39 json_config -- json_config/json_config.sh@323 -- # killprocess 3281119 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@946 -- # '[' -z 3281119 ']' 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@950 -- # kill -0 3281119 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@951 -- # uname 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3281119 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3281119' 00:06:52.281 killing process with pid 3281119 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@965 -- # kill 3281119 00:06:52.281 [2024-05-15 00:43:39.179922] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:52.281 00:43:39 json_config -- common/autotest_common.sh@970 -- # wait 3281119 00:06:53.664 00:43:40 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.664 00:43:40 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:53.664 00:43:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.664 00:43:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.664 00:43:40 json_config 
-- json_config/json_config.sh@328 -- # return 0 00:06:53.664 00:43:40 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:53.664 INFO: Success 00:06:53.664 00:06:53.664 real 0m10.415s 00:06:53.664 user 0m10.911s 00:06:53.664 sys 0m1.863s 00:06:53.664 00:43:40 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.664 00:43:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.664 ************************************ 00:06:53.664 END TEST json_config 00:06:53.664 ************************************ 00:06:53.664 00:43:40 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:53.664 00:43:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:53.664 00:43:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.664 00:43:40 -- common/autotest_common.sh@10 -- # set +x 00:06:53.665 ************************************ 00:06:53.665 START TEST json_config_extra_key 00:06:53.665 ************************************ 00:06:53.665 00:43:40 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:06:53.665 00:43:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.665 00:43:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.665 00:43:40 json_config_extra_key -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.665 00:43:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.665 00:43:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.665 00:43:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.665 00:43:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:53.665 00:43:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.665 00:43:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:53.665 00:43:40 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:53.665 INFO: launching applications... 00:06:53.665 00:43:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3281962 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:53.665 Waiting for target to run... 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3281962 /var/tmp/spdk_tgt.sock 00:06:53.665 00:43:40 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3281962 ']' 00:06:53.665 00:43:40 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:53.665 00:43:40 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.665 00:43:40 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:53.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:53.665 00:43:40 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.665 00:43:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.665 00:43:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:06:53.926 [2024-05-15 00:43:40.771311] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:06:53.926 [2024-05-15 00:43:40.771443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281962 ] 00:06:53.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.186 [2024-05-15 00:43:41.081851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.186 [2024-05-15 00:43:41.162209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.757 00:43:41 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.757 00:43:41 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:54.757 00:43:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:54.757 00:06:54.757 00:43:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:54.757 INFO: shutting down applications... 00:06:54.757 00:43:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:54.757 00:43:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:54.757 00:43:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:54.757 00:43:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3281962 ]] 00:06:54.757 00:43:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3281962 00:06:54.757 00:43:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:54.757 00:43:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:54.757 00:43:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3281962 00:06:54.757 00:43:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:55.017 00:43:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:55.017 00:43:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:55.017 00:43:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3281962 00:06:55.017 00:43:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:55.585 00:43:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:55.585 00:43:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:55.585 00:43:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3281962 00:06:55.585 00:43:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:56.154 00:43:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:56.154 00:43:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:56.154 00:43:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3281962 00:06:56.155 00:43:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:56.155 00:43:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:56.155 00:43:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:56.155 00:43:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:56.155 SPDK target shutdown done 00:06:56.155 00:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:56.155 Success 00:06:56.155 00:06:56.155 real 0m2.427s 00:06:56.155 user 0m1.792s 00:06:56.155 sys 0m0.449s 00:06:56.155 00:43:43 json_config_extra_key -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.155 00:43:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:56.155 ************************************ 00:06:56.155 END TEST json_config_extra_key 00:06:56.155 ************************************ 00:06:56.155 00:43:43 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:56.155 00:43:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:56.155 00:43:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.155 00:43:43 -- common/autotest_common.sh@10 -- # set +x 00:06:56.155 ************************************ 00:06:56.155 START TEST alias_rpc 00:06:56.155 ************************************ 00:06:56.155 00:43:43 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:56.155 * Looking for test storage... 00:06:56.155 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:06:56.155 00:43:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.155 00:43:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3282477 00:06:56.155 00:43:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3282477 00:06:56.155 00:43:43 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3282477 ']' 00:06:56.155 00:43:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:56.155 00:43:43 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.155 00:43:43 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.155 00:43:43 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.155 00:43:43 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.155 00:43:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.415 [2024-05-15 00:43:43.305755] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:06:56.415 [2024-05-15 00:43:43.305889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282477 ] 00:06:56.415 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.415 [2024-05-15 00:43:43.439270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.673 [2024-05-15 00:43:43.534880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:57.243 00:43:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:57.243 00:43:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3282477 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3282477 ']' 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3282477 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3282477 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3282477' 00:06:57.243 killing process with pid 3282477 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@965 -- # kill 3282477 00:06:57.243 00:43:44 alias_rpc -- common/autotest_common.sh@970 -- # wait 3282477 00:06:58.186 00:06:58.186 real 0m1.946s 00:06:58.186 user 0m1.920s 00:06:58.186 sys 0m0.492s 00:06:58.186 00:43:45 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.186 00:43:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.186 ************************************ 00:06:58.186 END TEST alias_rpc 00:06:58.186 ************************************ 00:06:58.186 00:43:45 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:58.186 00:43:45 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:58.186 00:43:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:58.186 00:43:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.186 00:43:45 -- common/autotest_common.sh@10 -- # set +x 00:06:58.186 ************************************ 00:06:58.186 START TEST spdkcli_tcp 00:06:58.186 ************************************ 00:06:58.186 00:43:45 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:58.186 * Looking for test storage... 
00:06:58.186 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:58.186 00:43:45 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:58.186 00:43:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3282927 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3282927 00:06:58.186 00:43:45 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3282927 ']' 00:06:58.186 00:43:45 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.186 00:43:45 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:58.186 00:43:45 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.186 00:43:45 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:58.186 00:43:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.186 00:43:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:58.446 [2024-05-15 00:43:45.323842] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:06:58.446 [2024-05-15 00:43:45.323986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282927 ] 00:06:58.446 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.446 [2024-05-15 00:43:45.460404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.707 [2024-05-15 00:43:45.554659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.707 [2024-05-15 00:43:45.554683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.277 00:43:46 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.277 00:43:46 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:59.277 00:43:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3283130 00:06:59.277 00:43:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:59.277 00:43:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:59.277 [ 00:06:59.277 "bdev_malloc_delete", 00:06:59.277 "bdev_malloc_create", 00:06:59.277 "bdev_null_resize", 00:06:59.277 "bdev_null_delete", 00:06:59.277 "bdev_null_create", 00:06:59.277 "bdev_nvme_cuse_unregister", 00:06:59.277 "bdev_nvme_cuse_register", 00:06:59.277 "bdev_opal_new_user", 00:06:59.277 "bdev_opal_set_lock_state", 00:06:59.277 "bdev_opal_delete", 00:06:59.277 "bdev_opal_get_info", 00:06:59.277 "bdev_opal_create", 00:06:59.277 "bdev_nvme_opal_revert", 00:06:59.277 "bdev_nvme_opal_init", 00:06:59.278 "bdev_nvme_send_cmd", 00:06:59.278 "bdev_nvme_get_path_iostat", 00:06:59.278 "bdev_nvme_get_mdns_discovery_info", 00:06:59.278 "bdev_nvme_stop_mdns_discovery", 00:06:59.278 "bdev_nvme_start_mdns_discovery", 00:06:59.278 "bdev_nvme_set_multipath_policy", 00:06:59.278 "bdev_nvme_set_preferred_path", 00:06:59.278 "bdev_nvme_get_io_paths", 00:06:59.278 "bdev_nvme_remove_error_injection", 00:06:59.278 "bdev_nvme_add_error_injection", 00:06:59.278 "bdev_nvme_get_discovery_info", 00:06:59.278 "bdev_nvme_stop_discovery", 00:06:59.278 "bdev_nvme_start_discovery", 00:06:59.278 "bdev_nvme_get_controller_health_info", 00:06:59.278 "bdev_nvme_disable_controller", 00:06:59.278 "bdev_nvme_enable_controller", 00:06:59.278 "bdev_nvme_reset_controller", 00:06:59.278 "bdev_nvme_get_transport_statistics", 00:06:59.278 "bdev_nvme_apply_firmware", 00:06:59.278 "bdev_nvme_detach_controller", 00:06:59.278 "bdev_nvme_get_controllers", 00:06:59.278 "bdev_nvme_attach_controller", 00:06:59.278 "bdev_nvme_set_hotplug", 00:06:59.278 "bdev_nvme_set_options", 00:06:59.278 "bdev_passthru_delete", 00:06:59.278 "bdev_passthru_create", 00:06:59.278 "bdev_lvol_check_shallow_copy", 00:06:59.278 "bdev_lvol_start_shallow_copy", 00:06:59.278 "bdev_lvol_grow_lvstore", 00:06:59.278 "bdev_lvol_get_lvols", 00:06:59.278 "bdev_lvol_get_lvstores", 00:06:59.278 "bdev_lvol_delete", 00:06:59.278 "bdev_lvol_set_read_only", 00:06:59.278 "bdev_lvol_resize", 00:06:59.278 "bdev_lvol_decouple_parent", 00:06:59.278 "bdev_lvol_inflate", 00:06:59.278 "bdev_lvol_rename", 00:06:59.278 "bdev_lvol_clone_bdev", 00:06:59.278 "bdev_lvol_clone", 00:06:59.278 "bdev_lvol_snapshot", 00:06:59.278 "bdev_lvol_create", 00:06:59.278 "bdev_lvol_delete_lvstore", 00:06:59.278 "bdev_lvol_rename_lvstore", 00:06:59.278 "bdev_lvol_create_lvstore", 00:06:59.278 "bdev_raid_set_options", 
00:06:59.278 "bdev_raid_remove_base_bdev", 00:06:59.278 "bdev_raid_add_base_bdev", 00:06:59.278 "bdev_raid_delete", 00:06:59.278 "bdev_raid_create", 00:06:59.278 "bdev_raid_get_bdevs", 00:06:59.278 "bdev_error_inject_error", 00:06:59.278 "bdev_error_delete", 00:06:59.278 "bdev_error_create", 00:06:59.278 "bdev_split_delete", 00:06:59.278 "bdev_split_create", 00:06:59.278 "bdev_delay_delete", 00:06:59.278 "bdev_delay_create", 00:06:59.278 "bdev_delay_update_latency", 00:06:59.278 "bdev_zone_block_delete", 00:06:59.278 "bdev_zone_block_create", 00:06:59.278 "blobfs_create", 00:06:59.278 "blobfs_detect", 00:06:59.278 "blobfs_set_cache_size", 00:06:59.278 "bdev_aio_delete", 00:06:59.278 "bdev_aio_rescan", 00:06:59.278 "bdev_aio_create", 00:06:59.278 "bdev_ftl_set_property", 00:06:59.278 "bdev_ftl_get_properties", 00:06:59.278 "bdev_ftl_get_stats", 00:06:59.278 "bdev_ftl_unmap", 00:06:59.278 "bdev_ftl_unload", 00:06:59.278 "bdev_ftl_delete", 00:06:59.278 "bdev_ftl_load", 00:06:59.278 "bdev_ftl_create", 00:06:59.278 "bdev_virtio_attach_controller", 00:06:59.278 "bdev_virtio_scsi_get_devices", 00:06:59.278 "bdev_virtio_detach_controller", 00:06:59.278 "bdev_virtio_blk_set_hotplug", 00:06:59.278 "bdev_iscsi_delete", 00:06:59.278 "bdev_iscsi_create", 00:06:59.278 "bdev_iscsi_set_options", 00:06:59.278 "accel_error_inject_error", 00:06:59.278 "ioat_scan_accel_module", 00:06:59.278 "dsa_scan_accel_module", 00:06:59.278 "iaa_scan_accel_module", 00:06:59.278 "keyring_file_remove_key", 00:06:59.278 "keyring_file_add_key", 00:06:59.278 "iscsi_get_histogram", 00:06:59.278 "iscsi_enable_histogram", 00:06:59.278 "iscsi_set_options", 00:06:59.278 "iscsi_get_auth_groups", 00:06:59.278 "iscsi_auth_group_remove_secret", 00:06:59.278 "iscsi_auth_group_add_secret", 00:06:59.278 "iscsi_delete_auth_group", 00:06:59.278 "iscsi_create_auth_group", 00:06:59.278 "iscsi_set_discovery_auth", 00:06:59.278 "iscsi_get_options", 00:06:59.278 "iscsi_target_node_request_logout", 00:06:59.278 "iscsi_target_node_set_redirect", 00:06:59.278 "iscsi_target_node_set_auth", 00:06:59.278 "iscsi_target_node_add_lun", 00:06:59.278 "iscsi_get_stats", 00:06:59.278 "iscsi_get_connections", 00:06:59.278 "iscsi_portal_group_set_auth", 00:06:59.278 "iscsi_start_portal_group", 00:06:59.278 "iscsi_delete_portal_group", 00:06:59.278 "iscsi_create_portal_group", 00:06:59.278 "iscsi_get_portal_groups", 00:06:59.278 "iscsi_delete_target_node", 00:06:59.278 "iscsi_target_node_remove_pg_ig_maps", 00:06:59.278 "iscsi_target_node_add_pg_ig_maps", 00:06:59.278 "iscsi_create_target_node", 00:06:59.278 "iscsi_get_target_nodes", 00:06:59.278 "iscsi_delete_initiator_group", 00:06:59.278 "iscsi_initiator_group_remove_initiators", 00:06:59.278 "iscsi_initiator_group_add_initiators", 00:06:59.278 "iscsi_create_initiator_group", 00:06:59.278 "iscsi_get_initiator_groups", 00:06:59.278 "nvmf_set_crdt", 00:06:59.278 "nvmf_set_config", 00:06:59.278 "nvmf_set_max_subsystems", 00:06:59.278 "nvmf_subsystem_get_listeners", 00:06:59.278 "nvmf_subsystem_get_qpairs", 00:06:59.278 "nvmf_subsystem_get_controllers", 00:06:59.278 "nvmf_get_stats", 00:06:59.278 "nvmf_get_transports", 00:06:59.278 "nvmf_create_transport", 00:06:59.278 "nvmf_get_targets", 00:06:59.278 "nvmf_delete_target", 00:06:59.278 "nvmf_create_target", 00:06:59.278 "nvmf_subsystem_allow_any_host", 00:06:59.278 "nvmf_subsystem_remove_host", 00:06:59.278 "nvmf_subsystem_add_host", 00:06:59.278 "nvmf_ns_remove_host", 00:06:59.278 "nvmf_ns_add_host", 00:06:59.278 "nvmf_subsystem_remove_ns", 00:06:59.278 
"nvmf_subsystem_add_ns", 00:06:59.278 "nvmf_subsystem_listener_set_ana_state", 00:06:59.278 "nvmf_discovery_get_referrals", 00:06:59.278 "nvmf_discovery_remove_referral", 00:06:59.278 "nvmf_discovery_add_referral", 00:06:59.278 "nvmf_subsystem_remove_listener", 00:06:59.278 "nvmf_subsystem_add_listener", 00:06:59.278 "nvmf_delete_subsystem", 00:06:59.278 "nvmf_create_subsystem", 00:06:59.278 "nvmf_get_subsystems", 00:06:59.278 "env_dpdk_get_mem_stats", 00:06:59.278 "nbd_get_disks", 00:06:59.278 "nbd_stop_disk", 00:06:59.278 "nbd_start_disk", 00:06:59.278 "ublk_recover_disk", 00:06:59.278 "ublk_get_disks", 00:06:59.278 "ublk_stop_disk", 00:06:59.278 "ublk_start_disk", 00:06:59.278 "ublk_destroy_target", 00:06:59.278 "ublk_create_target", 00:06:59.278 "virtio_blk_create_transport", 00:06:59.278 "virtio_blk_get_transports", 00:06:59.278 "vhost_controller_set_coalescing", 00:06:59.278 "vhost_get_controllers", 00:06:59.278 "vhost_delete_controller", 00:06:59.278 "vhost_create_blk_controller", 00:06:59.278 "vhost_scsi_controller_remove_target", 00:06:59.278 "vhost_scsi_controller_add_target", 00:06:59.278 "vhost_start_scsi_controller", 00:06:59.278 "vhost_create_scsi_controller", 00:06:59.278 "thread_set_cpumask", 00:06:59.278 "framework_get_scheduler", 00:06:59.278 "framework_set_scheduler", 00:06:59.278 "framework_get_reactors", 00:06:59.278 "thread_get_io_channels", 00:06:59.278 "thread_get_pollers", 00:06:59.278 "thread_get_stats", 00:06:59.278 "framework_monitor_context_switch", 00:06:59.278 "spdk_kill_instance", 00:06:59.278 "log_enable_timestamps", 00:06:59.278 "log_get_flags", 00:06:59.278 "log_clear_flag", 00:06:59.278 "log_set_flag", 00:06:59.278 "log_get_level", 00:06:59.278 "log_set_level", 00:06:59.278 "log_get_print_level", 00:06:59.278 "log_set_print_level", 00:06:59.278 "framework_enable_cpumask_locks", 00:06:59.278 "framework_disable_cpumask_locks", 00:06:59.278 "framework_wait_init", 00:06:59.278 "framework_start_init", 00:06:59.278 "scsi_get_devices", 00:06:59.278 "bdev_get_histogram", 00:06:59.278 "bdev_enable_histogram", 00:06:59.278 "bdev_set_qos_limit", 00:06:59.278 "bdev_set_qd_sampling_period", 00:06:59.278 "bdev_get_bdevs", 00:06:59.278 "bdev_reset_iostat", 00:06:59.278 "bdev_get_iostat", 00:06:59.278 "bdev_examine", 00:06:59.278 "bdev_wait_for_examine", 00:06:59.278 "bdev_set_options", 00:06:59.278 "notify_get_notifications", 00:06:59.278 "notify_get_types", 00:06:59.278 "accel_get_stats", 00:06:59.278 "accel_set_options", 00:06:59.278 "accel_set_driver", 00:06:59.278 "accel_crypto_key_destroy", 00:06:59.278 "accel_crypto_keys_get", 00:06:59.278 "accel_crypto_key_create", 00:06:59.278 "accel_assign_opc", 00:06:59.278 "accel_get_module_info", 00:06:59.278 "accel_get_opc_assignments", 00:06:59.278 "vmd_rescan", 00:06:59.278 "vmd_remove_device", 00:06:59.278 "vmd_enable", 00:06:59.278 "sock_get_default_impl", 00:06:59.278 "sock_set_default_impl", 00:06:59.278 "sock_impl_set_options", 00:06:59.278 "sock_impl_get_options", 00:06:59.278 "iobuf_get_stats", 00:06:59.278 "iobuf_set_options", 00:06:59.278 "framework_get_pci_devices", 00:06:59.278 "framework_get_config", 00:06:59.278 "framework_get_subsystems", 00:06:59.278 "trace_get_info", 00:06:59.278 "trace_get_tpoint_group_mask", 00:06:59.278 "trace_disable_tpoint_group", 00:06:59.278 "trace_enable_tpoint_group", 00:06:59.278 "trace_clear_tpoint_mask", 00:06:59.278 "trace_set_tpoint_mask", 00:06:59.278 "keyring_get_keys", 00:06:59.278 "spdk_get_version", 00:06:59.278 "rpc_get_methods" 00:06:59.278 ] 00:06:59.278 00:43:46 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:59.278 00:43:46 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.278 00:43:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.278 00:43:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:59.278 00:43:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3282927 00:06:59.278 00:43:46 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3282927 ']' 00:06:59.278 00:43:46 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3282927 00:06:59.278 00:43:46 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:59.278 00:43:46 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:59.278 00:43:46 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3282927 00:06:59.278 00:43:46 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:59.279 00:43:46 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:59.279 00:43:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3282927' 00:06:59.279 killing process with pid 3282927 00:06:59.279 00:43:46 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3282927 00:06:59.279 00:43:46 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3282927 00:07:00.217 00:07:00.217 real 0m1.995s 00:07:00.217 user 0m3.352s 00:07:00.217 sys 0m0.529s 00:07:00.217 00:43:47 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.217 00:43:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.217 ************************************ 00:07:00.217 END TEST spdkcli_tcp 00:07:00.217 ************************************ 00:07:00.217 00:43:47 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:00.217 00:43:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.217 00:43:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.217 00:43:47 -- common/autotest_common.sh@10 -- # set +x 00:07:00.217 ************************************ 00:07:00.217 START TEST dpdk_mem_utility 00:07:00.217 ************************************ 00:07:00.217 00:43:47 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:00.217 * Looking for test storage... 
00:07:00.217 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:07:00.217 00:43:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:00.217 00:43:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3283486 00:07:00.217 00:43:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3283486 00:07:00.217 00:43:47 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3283486 ']' 00:07:00.217 00:43:47 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.217 00:43:47 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.217 00:43:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.217 00:43:47 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.217 00:43:47 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.217 00:43:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.478 [2024-05-15 00:43:47.350653] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:00.478 [2024-05-15 00:43:47.350797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283486 ] 00:07:00.478 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.478 [2024-05-15 00:43:47.482567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.738 [2024-05-15 00:43:47.575907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.308 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.308 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:07:01.308 00:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:01.308 00:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:01.308 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.308 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:01.308 { 00:07:01.308 "filename": "/tmp/spdk_mem_dump.txt" 00:07:01.308 } 00:07:01.308 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.308 00:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:01.308 DPDK memory size 820.000000 MiB in 1 heap(s) 00:07:01.308 1 heaps totaling size 820.000000 MiB 00:07:01.308 size: 820.000000 MiB heap id: 0 00:07:01.308 end heaps---------- 00:07:01.308 8 mempools totaling size 598.116089 MiB 00:07:01.308 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:01.308 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:01.308 size: 84.521057 MiB name: bdev_io_3283486 00:07:01.308 size: 51.011292 MiB name: evtpool_3283486 00:07:01.308 size: 50.003479 MiB name: msgpool_3283486 
00:07:01.308 size: 21.763794 MiB name: PDU_Pool 00:07:01.308 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:01.308 size: 0.026123 MiB name: Session_Pool 00:07:01.308 end mempools------- 00:07:01.308 6 memzones totaling size 4.142822 MiB 00:07:01.308 size: 1.000366 MiB name: RG_ring_0_3283486 00:07:01.308 size: 1.000366 MiB name: RG_ring_1_3283486 00:07:01.308 size: 1.000366 MiB name: RG_ring_4_3283486 00:07:01.308 size: 1.000366 MiB name: RG_ring_5_3283486 00:07:01.308 size: 0.125366 MiB name: RG_ring_2_3283486 00:07:01.308 size: 0.015991 MiB name: RG_ring_3_3283486 00:07:01.308 end memzones------- 00:07:01.308 00:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:01.308 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:07:01.308 list of free elements. size: 18.514832 MiB 00:07:01.308 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:01.308 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:01.308 element at address: 0x200007000000 with size: 1.995972 MiB 00:07:01.308 element at address: 0x20000b200000 with size: 1.995972 MiB 00:07:01.308 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:01.309 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:01.309 element at address: 0x200019600000 with size: 0.999329 MiB 00:07:01.309 element at address: 0x200003e00000 with size: 0.996094 MiB 00:07:01.309 element at address: 0x200032200000 with size: 0.994324 MiB 00:07:01.309 element at address: 0x200018e00000 with size: 0.959900 MiB 00:07:01.309 element at address: 0x200019900040 with size: 0.937256 MiB 00:07:01.309 element at address: 0x200000200000 with size: 0.840942 MiB 00:07:01.309 element at address: 0x20001b000000 with size: 0.583191 MiB 00:07:01.309 element at address: 0x200019200000 with size: 0.491150 MiB 00:07:01.309 element at address: 0x200019a00000 with size: 0.485657 MiB 00:07:01.309 element at address: 0x200013800000 with size: 0.470581 MiB 00:07:01.309 element at address: 0x200028400000 with size: 0.411072 MiB 00:07:01.309 element at address: 0x200003a00000 with size: 0.356140 MiB 00:07:01.309 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:07:01.309 list of standard malloc elements. 
size: 199.220764 MiB 00:07:01.309 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:07:01.309 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:07:01.309 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:01.309 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:01.309 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:01.309 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:01.309 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:07:01.309 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:01.309 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:07:01.309 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:07:01.309 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:01.309 element at address: 0x200003aff980 with size: 0.000244 MiB 00:07:01.309 element at address: 0x200003affa80 with size: 0.000244 MiB 00:07:01.309 element at address: 0x200003eff000 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:07:01.309 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:07:01.309 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:07:01.309 list of memzone associated elements. 
size: 602.264404 MiB 00:07:01.309 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:07:01.309 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:01.309 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:07:01.309 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:01.309 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:07:01.309 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3283486_0 00:07:01.309 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:07:01.309 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3283486_0 00:07:01.309 element at address: 0x200003fff340 with size: 48.003113 MiB 00:07:01.309 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3283486_0 00:07:01.309 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:07:01.309 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:01.309 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:07:01.309 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:01.309 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:07:01.309 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3283486 00:07:01.309 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:07:01.309 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3283486 00:07:01.309 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:01.309 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3283486 00:07:01.309 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:01.309 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:01.309 element at address: 0x200019abc780 with size: 1.008179 MiB 00:07:01.309 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:01.309 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:01.309 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:01.309 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:07:01.309 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:01.309 element at address: 0x200003eff100 with size: 1.000549 MiB 00:07:01.309 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3283486 00:07:01.309 element at address: 0x200003affb80 with size: 1.000549 MiB 00:07:01.309 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3283486 00:07:01.309 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:07:01.309 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3283486 00:07:01.309 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:07:01.309 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3283486 00:07:01.309 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:07:01.309 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3283486 00:07:01.309 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:07:01.309 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:01.309 element at address: 0x200013878780 with size: 0.500549 MiB 00:07:01.309 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:01.309 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:07:01.309 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:01.309 element at address: 0x200003adf740 with size: 0.125549 MiB 00:07:01.309 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3283486 00:07:01.309 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:07:01.309 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:01.309 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:07:01.309 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:01.309 element at address: 0x200003adb500 with size: 0.016174 MiB 00:07:01.309 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3283486 00:07:01.309 element at address: 0x20002846f540 with size: 0.002502 MiB 00:07:01.309 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:01.309 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:07:01.309 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3283486 00:07:01.309 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:07:01.309 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3283486 00:07:01.309 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:07:01.309 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:01.309 00:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:01.309 00:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3283486 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3283486 ']' 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3283486 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3283486 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3283486' 00:07:01.309 killing process with pid 3283486 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3283486 00:07:01.309 00:43:48 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3283486 00:07:02.251 00:07:02.251 real 0m1.920s 00:07:02.251 user 0m1.925s 00:07:02.251 sys 0m0.468s 00:07:02.251 00:43:49 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.251 00:43:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:02.251 ************************************ 00:07:02.251 END TEST dpdk_mem_utility 00:07:02.251 ************************************ 00:07:02.251 00:43:49 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:07:02.251 00:43:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.251 00:43:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.251 00:43:49 -- common/autotest_common.sh@10 -- # set +x 00:07:02.251 ************************************ 00:07:02.251 START TEST event 00:07:02.251 ************************************ 00:07:02.251 00:43:49 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:07:02.251 * Looking for test storage... 
00:07:02.251 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:07:02.251 00:43:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:02.251 00:43:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:02.251 00:43:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:02.251 00:43:49 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:02.251 00:43:49 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.251 00:43:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.251 ************************************ 00:07:02.251 START TEST event_perf 00:07:02.251 ************************************ 00:07:02.251 00:43:49 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:02.511 Running I/O for 1 seconds...[2024-05-15 00:43:49.317162] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:02.511 [2024-05-15 00:43:49.317275] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283850 ] 00:07:02.511 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.511 [2024-05-15 00:43:49.433765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.511 [2024-05-15 00:43:49.527668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.511 [2024-05-15 00:43:49.527756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.511 [2024-05-15 00:43:49.527857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.511 [2024-05-15 00:43:49.527865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.933 Running I/O for 1 seconds... 00:07:03.933 lcore 0: 145912 00:07:03.933 lcore 1: 145914 00:07:03.933 lcore 2: 145908 00:07:03.933 lcore 3: 145910 00:07:03.933 done. 00:07:03.933 00:07:03.933 real 0m1.391s 00:07:03.933 user 0m4.234s 00:07:03.933 sys 0m0.139s 00:07:03.933 00:43:50 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.933 00:43:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.933 ************************************ 00:07:03.933 END TEST event_perf 00:07:03.933 ************************************ 00:07:03.933 00:43:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:03.933 00:43:50 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:03.933 00:43:50 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.933 00:43:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.933 ************************************ 00:07:03.933 START TEST event_reactor 00:07:03.933 ************************************ 00:07:03.933 00:43:50 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:03.933 [2024-05-15 00:43:50.766842] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:03.933 [2024-05-15 00:43:50.766949] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284165 ] 00:07:03.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.933 [2024-05-15 00:43:50.880933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.216 [2024-05-15 00:43:50.975314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.156 test_start 00:07:05.156 oneshot 00:07:05.156 tick 100 00:07:05.157 tick 100 00:07:05.157 tick 250 00:07:05.157 tick 100 00:07:05.157 tick 100 00:07:05.157 tick 100 00:07:05.157 tick 250 00:07:05.157 tick 500 00:07:05.157 tick 100 00:07:05.157 tick 100 00:07:05.157 tick 250 00:07:05.157 tick 100 00:07:05.157 tick 100 00:07:05.157 test_end 00:07:05.157 00:07:05.157 real 0m1.388s 00:07:05.157 user 0m1.249s 00:07:05.157 sys 0m0.131s 00:07:05.157 00:43:52 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.157 00:43:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:05.157 ************************************ 00:07:05.157 END TEST event_reactor 00:07:05.157 ************************************ 00:07:05.157 00:43:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:05.157 00:43:52 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:05.157 00:43:52 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.157 00:43:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.157 ************************************ 00:07:05.157 START TEST event_reactor_perf 00:07:05.157 ************************************ 00:07:05.157 00:43:52 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:05.157 [2024-05-15 00:43:52.209839] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:05.157 [2024-05-15 00:43:52.209947] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284483 ] 00:07:05.417 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.417 [2024-05-15 00:43:52.325298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.417 [2024-05-15 00:43:52.417055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.797 test_start 00:07:06.797 test_end 00:07:06.797 Performance: 423457 events per second 00:07:06.797 00:07:06.797 real 0m1.390s 00:07:06.797 user 0m1.246s 00:07:06.797 sys 0m0.136s 00:07:06.797 00:43:53 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.797 00:43:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.797 ************************************ 00:07:06.797 END TEST event_reactor_perf 00:07:06.797 ************************************ 00:07:06.797 00:43:53 event -- event/event.sh@49 -- # uname -s 00:07:06.797 00:43:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:06.797 00:43:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:06.797 00:43:53 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.797 00:43:53 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.797 00:43:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.797 ************************************ 00:07:06.797 START TEST event_scheduler 00:07:06.797 ************************************ 00:07:06.797 00:43:53 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:06.797 * Looking for test storage... 00:07:06.797 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:07:06.797 00:43:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:06.797 00:43:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3284830 00:07:06.797 00:43:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.797 00:43:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3284830 00:07:06.797 00:43:53 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3284830 ']' 00:07:06.797 00:43:53 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.797 00:43:53 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.797 00:43:53 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:06.797 00:43:53 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.798 00:43:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:06.798 00:43:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:06.798 [2024-05-15 00:43:53.772384] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:06.798 [2024-05-15 00:43:53.772505] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284830 ] 00:07:06.798 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.056 [2024-05-15 00:43:53.889809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.056 [2024-05-15 00:43:53.989020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.056 [2024-05-15 00:43:53.989173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.056 [2024-05-15 00:43:53.989175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.056 [2024-05-15 00:43:53.989187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.625 00:43:54 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.625 00:43:54 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:07:07.625 00:43:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:07.625 00:43:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.625 00:43:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.625 POWER: Env isn't set yet! 00:07:07.625 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:07.625 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:07.626 POWER: Cannot set governor of lcore 0 to userspace 00:07:07.626 POWER: Attempting to initialise PSTAT power management... 00:07:07.626 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:07:07.626 POWER: Initialized successfully for lcore 0 power management 00:07:07.626 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:07:07.626 POWER: Initialized successfully for lcore 1 power management 00:07:07.626 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:07:07.626 POWER: Initialized successfully for lcore 2 power management 00:07:07.626 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:07:07.626 POWER: Initialized successfully for lcore 3 power management 00:07:07.626 00:43:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.626 00:43:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:07.626 00:43:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.626 00:43:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.886 [2024-05-15 00:43:54.793416] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:07.886 00:43:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.886 00:43:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:07.886 00:43:54 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.886 00:43:54 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.886 00:43:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.886 ************************************ 00:07:07.886 START TEST scheduler_create_thread 00:07:07.886 ************************************ 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.886 2 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.886 3 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.886 4 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.886 5 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.886 6 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.886 7 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.886 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.886 8 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 9 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 10 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.887 00:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.796 00:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.796 00:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:09.796 00:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:09.796 00:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.796 00:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.735 00:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.735 00:07:10.735 real 0m2.613s 00:07:10.735 user 0m0.018s 00:07:10.735 sys 0m0.004s 00:07:10.735 00:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.735 00:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.735 ************************************ 00:07:10.735 END TEST scheduler_create_thread 00:07:10.735 ************************************ 00:07:10.735 00:43:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:10.735 00:43:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3284830 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3284830 ']' 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3284830 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3284830 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3284830' 00:07:10.735 killing process with pid 3284830 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3284830 00:07:10.735 00:43:57 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3284830 00:07:10.995 [2024-05-15 00:43:57.922796] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
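The scheduler_create_thread test above exercises the scheduler_plugin RPCs. Below is a condensed sketch of the calls the xtrace shows, assuming the plugin module is importable by rpc.py (the harness passes it via rpc_cmd --plugin) and the default RPC socket; the thread IDs 11 and 12 seen in the trace are whatever the create calls return.

RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

# four busy threads pinned to one core each (scheduler.sh@12-15)
for mask in 0x1 0x2 0x4 0x8; do
  $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
done

# four idle pinned threads, one unpinned thread at 30% load, one at 0% (scheduler.sh@16-22)
for mask in 0x1 0x2 0x4 0x8; do
  $RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $mask -a 0
done
$RPC --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$($RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)

# raise the last thread to 50% activity, then create and delete a short-lived thread (scheduler.sh@23-26)
$RPC --plugin scheduler_plugin scheduler_thread_set_active $thread_id 50
deleted_id=$($RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$RPC --plugin scheduler_plugin scheduler_thread_delete $deleted_id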
00:07:11.255 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:07:11.255 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:07:11.255 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:07:11.255 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:07:11.255 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:07:11.255 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:07:11.255 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:07:11.255 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:07:11.516 00:07:11.516 real 0m4.755s 00:07:11.516 user 0m8.521s 00:07:11.516 sys 0m0.449s 00:07:11.516 00:43:58 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.516 00:43:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.516 ************************************ 00:07:11.516 END TEST event_scheduler 00:07:11.516 ************************************ 00:07:11.516 00:43:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:11.516 00:43:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:11.516 00:43:58 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.516 00:43:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.516 00:43:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.516 ************************************ 00:07:11.516 START TEST app_repeat 00:07:11.516 ************************************ 00:07:11.516 00:43:58 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3285771 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3285771' 00:07:11.516 Process app_repeat pid: 3285771 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:11.516 spdk_app_start Round 0 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3285771 /var/tmp/spdk-nbd.sock 00:07:11.516 00:43:58 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3285771 ']' 00:07:11.516 00:43:58 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.516 00:43:58 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.516 00:43:58 
event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.516 00:43:58 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.516 00:43:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:11.516 00:43:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.516 [2024-05-15 00:43:58.516314] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:11.516 [2024-05-15 00:43:58.516448] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285771 ] 00:07:11.776 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.776 [2024-05-15 00:43:58.650394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.776 [2024-05-15 00:43:58.742965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.776 [2024-05-15 00:43:58.742988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.347 00:43:59 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:12.347 00:43:59 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:12.347 00:43:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.607 Malloc0 00:07:12.607 00:43:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.607 Malloc1 00:07:12.607 00:43:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.607 00:43:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.868 /dev/nbd0 00:07:12.868 00:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.868 00:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.868 1+0 records in 00:07:12.868 1+0 records out 00:07:12.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024787 s, 16.5 MB/s 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:12.868 00:43:59 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:12.868 00:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.868 00:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.868 00:43:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.128 /dev/nbd1 00:07:13.128 00:44:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.128 00:44:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.128 1+0 records in 00:07:13.128 1+0 records out 00:07:13.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279092 s, 14.7 MB/s 00:07:13.128 
00:44:00 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:13.128 00:44:00 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:13.128 00:44:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.128 00:44:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.128 00:44:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.128 00:44:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.128 00:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.388 { 00:07:13.388 "nbd_device": "/dev/nbd0", 00:07:13.388 "bdev_name": "Malloc0" 00:07:13.388 }, 00:07:13.388 { 00:07:13.388 "nbd_device": "/dev/nbd1", 00:07:13.388 "bdev_name": "Malloc1" 00:07:13.388 } 00:07:13.388 ]' 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.388 { 00:07:13.388 "nbd_device": "/dev/nbd0", 00:07:13.388 "bdev_name": "Malloc0" 00:07:13.388 }, 00:07:13.388 { 00:07:13.388 "nbd_device": "/dev/nbd1", 00:07:13.388 "bdev_name": "Malloc1" 00:07:13.388 } 00:07:13.388 ]' 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.388 /dev/nbd1' 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.388 /dev/nbd1' 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:13.388 256+0 records in 00:07:13.388 256+0 records out 00:07:13.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534458 s, 196 MB/s 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.388 00:44:00 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.388 256+0 records in 00:07:13.388 256+0 records out 00:07:13.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146096 s, 71.8 MB/s 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.388 256+0 records in 00:07:13.388 256+0 records out 00:07:13.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017498 s, 59.9 MB/s 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:13.388 00:44:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.389 00:44:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:13.389 00:44:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.389 00:44:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:13.389 00:44:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.389 00:44:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.389 00:44:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.389 00:44:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:13.389 00:44:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.389 00:44:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.647 00:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.647 00:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.647 00:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.647 00:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.647 00:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.647 00:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.647 00:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.647 00:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
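The waitfornbd / waitfornbd_exit helpers that appear throughout the trace poll /proc/partitions for up to 20 iterations; waitfornbd additionally performs one direct 4 KiB read to confirm the device is usable. A simplified reconstruction follows, under the assumption that the harness sleeps briefly between retries (the sleep itself is not echoed by xtrace); the test-file path is the one shown in the log.

waitfornbd() {
  local nbd_name=$1 i size
  local testfile=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest

  # wait for the kernel to list the device (common/autotest_common.sh@867-869)
  for (( i = 1; i <= 20; i++ )); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1   # assumed back-off; the trace only shows the loop bounds
  done

  # wait until a direct 4 KiB read returns data (common/autotest_common.sh@880-885)
  for (( i = 1; i <= 20; i++ )); do
    if dd if=/dev/$nbd_name of=$testfile bs=4096 count=1 iflag=direct; then
      size=$(stat -c %s $testfile)
      rm -f $testfile
      [ "$size" != 0 ] && return 0
    fi
    sleep 0.1   # assumed back-off
  done
  return 1
}

waitfornbd_exit() {
  local nbd_name=$1 i
  # wait for the device to drop out of /proc/partitions (bdev/nbd_common.sh@37-41)
  for (( i = 1; i <= 20; i++ )); do
    grep -q -w "$nbd_name" /proc/partitions || break
    sleep 0.1   # assumed back-off
  done
}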
00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.648 00:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.906 00:44:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.906 00:44:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:14.166 00:44:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.735 [2024-05-15 00:44:01.507094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.735 [2024-05-15 00:44:01.594208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.735 [2024-05-15 00:44:01.594209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.735 [2024-05-15 00:44:01.665577] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.735 [2024-05-15 00:44:01.665637] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
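Each app_repeat round above drives the same nbd_rpc_data_verify flow against the /var/tmp/spdk-nbd.sock instance: create two 64 MB malloc bdevs with a 4 KiB block size, export them as /dev/nbd0 and /dev/nbd1, write a 1 MiB random pattern to each, verify it with cmp, then tear the devices down. A compressed sketch of one round's data path, using the rpc.py invocation and file paths shown in the trace:

RPC="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
randfile=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest

$RPC bdev_malloc_create 64 4096            # Malloc0
$RPC bdev_malloc_create 64 4096            # Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# write a 1 MiB random pattern to both devices, then verify it byte-for-byte
dd if=/dev/urandom of=$randfile bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if=$randfile of=$nbd bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M $randfile $nbd
done
rm $randfile

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1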
00:07:17.272 00:44:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.272 00:44:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:17.272 spdk_app_start Round 1 00:07:17.272 00:44:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3285771 /var/tmp/spdk-nbd.sock 00:07:17.272 00:44:04 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3285771 ']' 00:07:17.272 00:44:04 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.272 00:44:04 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:17.272 00:44:04 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:17.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:17.272 00:44:04 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:17.272 00:44:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.272 00:44:04 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:17.272 00:44:04 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:17.272 00:44:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.532 Malloc0 00:07:17.532 00:44:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.532 Malloc1 00:07:17.532 00:44:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.532 00:44:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.793 /dev/nbd0 00:07:17.793 00:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.793 00:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.793 1+0 records in 00:07:17.793 1+0 records out 00:07:17.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330592 s, 12.4 MB/s 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:17.793 00:44:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:17.793 00:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.793 00:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.793 00:44:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:18.053 /dev/nbd1 00:07:18.053 00:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.053 00:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:18.053 1+0 records in 00:07:18.053 1+0 records out 00:07:18.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259597 s, 15.8 MB/s 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:18.053 00:44:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:18.053 00:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.053 00:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.053 00:44:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.053 00:44:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.053 00:44:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.053 { 00:07:18.053 "nbd_device": "/dev/nbd0", 00:07:18.053 "bdev_name": "Malloc0" 00:07:18.053 }, 00:07:18.053 { 00:07:18.053 "nbd_device": "/dev/nbd1", 00:07:18.053 "bdev_name": "Malloc1" 00:07:18.053 } 00:07:18.053 ]' 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.053 { 00:07:18.053 "nbd_device": "/dev/nbd0", 00:07:18.053 "bdev_name": "Malloc0" 00:07:18.053 }, 00:07:18.053 { 00:07:18.053 "nbd_device": "/dev/nbd1", 00:07:18.053 "bdev_name": "Malloc1" 00:07:18.053 } 00:07:18.053 ]' 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.053 /dev/nbd1' 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.053 /dev/nbd1' 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.053 256+0 records in 00:07:18.053 256+0 records out 00:07:18.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463985 s, 226 MB/s 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.053 00:44:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.313 256+0 records in 00:07:18.313 256+0 records out 00:07:18.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153001 s, 68.5 MB/s 00:07:18.313 00:44:05 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.313 256+0 records in 00:07:18.313 256+0 records out 00:07:18.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01487 s, 70.5 MB/s 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.313 00:44:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.573 00:44:05 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.573 00:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.832 00:44:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.832 00:44:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:18.832 00:44:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.402 [2024-05-15 00:44:06.371635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.402 [2024-05-15 00:44:06.460986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.402 [2024-05-15 00:44:06.461003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.662 [2024-05-15 00:44:06.533227] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.662 [2024-05-15 00:44:06.533286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
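Around each round, nbd_get_count (seen in the trace just above) checks how many NBD devices the target still exports by listing them over RPC and counting /dev/nbd entries; once both disks are stopped the JSON list is empty and the count is 0. A small sketch of that check, assuming the same socket:

RPC="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

nbd_disks_json=$($RPC nbd_get_disks)                              # '[]' once both disks are stopped
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)        # '|| true' mirrors the bare 'true' in the trace
echo "exported NBD devices: $count"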
00:07:22.201 00:44:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:22.201 00:44:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:22.201 spdk_app_start Round 2 00:07:22.201 00:44:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3285771 /var/tmp/spdk-nbd.sock 00:07:22.201 00:44:08 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3285771 ']' 00:07:22.201 00:44:08 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.201 00:44:08 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.201 00:44:08 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:22.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.201 00:44:08 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.201 00:44:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.201 00:44:09 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.201 00:44:09 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:22.202 00:44:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.202 Malloc0 00:07:22.202 00:44:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.462 Malloc1 00:07:22.462 00:44:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.462 00:44:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:22.722 /dev/nbd0 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.722 1+0 records in 00:07:22.722 1+0 records out 00:07:22.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200721 s, 20.4 MB/s 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:22.722 /dev/nbd1 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.722 1+0 records in 00:07:22.722 1+0 records out 00:07:22.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249517 s, 16.4 MB/s 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:22.722 00:44:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.722 00:44:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.982 { 00:07:22.982 "nbd_device": "/dev/nbd0", 00:07:22.982 "bdev_name": "Malloc0" 00:07:22.982 }, 00:07:22.982 { 00:07:22.982 "nbd_device": "/dev/nbd1", 00:07:22.982 "bdev_name": "Malloc1" 00:07:22.982 } 00:07:22.982 ]' 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.982 { 00:07:22.982 "nbd_device": "/dev/nbd0", 00:07:22.982 "bdev_name": "Malloc0" 00:07:22.982 }, 00:07:22.982 { 00:07:22.982 "nbd_device": "/dev/nbd1", 00:07:22.982 "bdev_name": "Malloc1" 00:07:22.982 } 00:07:22.982 ]' 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:22.982 /dev/nbd1' 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:22.982 /dev/nbd1' 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:22.982 256+0 records in 00:07:22.982 256+0 records out 00:07:22.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450868 s, 233 MB/s 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:22.982 256+0 records in 00:07:22.982 256+0 records out 00:07:22.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147487 s, 71.1 MB/s 00:07:22.982 00:44:09 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:22.982 256+0 records in 00:07:22.982 256+0 records out 00:07:22.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161917 s, 64.8 MB/s 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.982 00:44:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.982 00:44:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.248 00:44:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.569 00:44:10 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.569 00:44:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.570 00:44:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.570 00:44:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:23.830 00:44:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:24.398 [2024-05-15 00:44:11.213281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.398 [2024-05-15 00:44:11.302253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.398 [2024-05-15 00:44:11.302254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.398 [2024-05-15 00:44:11.376692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:24.398 [2024-05-15 00:44:11.376736] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:26.938 00:44:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3285771 /var/tmp/spdk-nbd.sock 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3285771 ']' 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
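The trace above is nbd_dd_data_verify in its write phase followed by its verify phase: a 1 MiB random pattern is pushed through each NBD device with O_DIRECT and then compared back against the source file. A minimal standalone sketch of that round trip, using the same dd/cmp parameters as the trace (the scratch-file path and device list are illustrative, not the helper itself):

pattern=/tmp/nbdrandtest                 # illustrative scratch file
nbd_list=(/dev/nbd0 /dev/nbd1)
dd if=/dev/urandom of="$pattern" bs=4096 count=256            # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
  dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct   # write through the NBD device
done
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$pattern" "$dev"                              # byte-compare the first 1 MiB
done
rm "$pattern"                                                 # same cleanup the helper performs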
00:07:26.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:26.938 00:44:13 event.app_repeat -- event/event.sh@39 -- # killprocess 3285771 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3285771 ']' 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3285771 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3285771 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3285771' 00:07:26.938 killing process with pid 3285771 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3285771 00:07:26.938 00:44:13 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3285771 00:07:27.507 spdk_app_start is called in Round 0. 00:07:27.507 Shutdown signal received, stop current app iteration 00:07:27.507 Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 reinitialization... 00:07:27.507 spdk_app_start is called in Round 1. 00:07:27.507 Shutdown signal received, stop current app iteration 00:07:27.507 Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 reinitialization... 00:07:27.507 spdk_app_start is called in Round 2. 00:07:27.507 Shutdown signal received, stop current app iteration 00:07:27.507 Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 reinitialization... 00:07:27.507 spdk_app_start is called in Round 3. 
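After the data check, nbd_stop_disks detaches each device over RPC and waitfornbd_exit polls /proc/partitions until the kernel has dropped it, as traced above. A simplified sketch of that teardown, reusing the rpc.py path and socket from this run (the sleep between polls is an assumption; the trace only shows the grep loop and its retry bound):

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for dev in /dev/nbd0 /dev/nbd1; do
  "$rpc" -s "$sock" nbd_stop_disk "$dev"
  name=$(basename "$dev")
  for ((i = 1; i <= 20; i++)); do                 # same 20-try bound as waitfornbd_exit
    grep -q -w "$name" /proc/partitions || break  # gone from the kernel's view
    sleep 0.1                                     # assumed back-off, not visible in the trace
  done
done
"$rpc" -s "$sock" nbd_get_disks | grep -c /dev/nbd   # prints 0 once everything is detached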
00:07:27.507 Shutdown signal received, stop current app iteration 00:07:27.507 00:44:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:27.507 00:44:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:27.507 00:07:27.507 real 0m15.905s 00:07:27.507 user 0m33.404s 00:07:27.507 sys 0m2.176s 00:07:27.507 00:44:14 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.507 00:44:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.507 ************************************ 00:07:27.507 END TEST app_repeat 00:07:27.507 ************************************ 00:07:27.507 00:44:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:27.507 00:44:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:27.507 00:44:14 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:27.507 00:44:14 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.507 00:44:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.507 ************************************ 00:07:27.507 START TEST cpu_locks 00:07:27.507 ************************************ 00:07:27.507 00:44:14 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:27.507 * Looking for test storage... 00:07:27.507 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:07:27.507 00:44:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:27.507 00:44:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:27.507 00:44:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:27.507 00:44:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:27.507 00:44:14 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:27.507 00:44:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.507 00:44:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.507 ************************************ 00:07:27.507 START TEST default_locks 00:07:27.507 ************************************ 00:07:27.507 00:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:27.507 00:44:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3289022 00:07:27.507 00:44:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3289022 00:07:27.507 00:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3289022 ']' 00:07:27.507 00:44:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.507 00:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.507 00:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:27.507 00:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
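app_repeat ends here and control passes to cpu_locks.sh. The START TEST / END TEST banners and the real/user/sys lines that bracket every test in this log come from the run_test wrapper; a simplified illustration of that wrapper (the real helper in autotest_common.sh also manages xtrace and validates its arguments):

run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                                   # shows up as the real/user/sys block
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}
run_test_sketch cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh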
00:07:27.507 00:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:27.507 00:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.766 [2024-05-15 00:44:14.627401] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:27.766 [2024-05-15 00:44:14.627520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289022 ] 00:07:27.766 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.766 [2024-05-15 00:44:14.744053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.027 [2024-05-15 00:44:14.836664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.287 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:28.287 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:28.287 00:44:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3289022 00:07:28.287 00:44:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3289022 00:07:28.287 00:44:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.547 lslocks: write error 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3289022 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3289022 ']' 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3289022 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3289022 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3289022' 00:07:28.547 killing process with pid 3289022 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3289022 00:07:28.547 00:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3289022 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3289022 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3289022 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 3289022 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3289022 ']' 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.503 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3289022) - No such process 00:07:29.503 ERROR: process (pid: 3289022) is no longer running 00:07:29.503 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:29.504 00:07:29.504 real 0m1.865s 00:07:29.504 user 0m1.803s 00:07:29.504 sys 0m0.487s 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.504 00:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.504 ************************************ 00:07:29.504 END TEST default_locks 00:07:29.504 ************************************ 00:07:29.504 00:44:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:29.504 00:44:16 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.504 00:44:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.504 00:44:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.504 ************************************ 00:07:29.504 START TEST default_locks_via_rpc 00:07:29.504 ************************************ 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3289575 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3289575 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3289575 ']' 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.504 00:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:29.504 [2024-05-15 00:44:16.551177] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:29.504 [2024-05-15 00:44:16.551290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289575 ] 00:07:29.764 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.764 [2024-05-15 00:44:16.666706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.764 [2024-05-15 00:44:16.761624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3289575 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3289575 00:07:30.333 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3289575 00:07:30.592 00:44:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3289575 ']' 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3289575 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3289575 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3289575' 00:07:30.592 killing process with pid 3289575 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3289575 00:07:30.592 00:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3289575 00:07:31.531 00:07:31.531 real 0m1.811s 00:07:31.531 user 0m1.763s 00:07:31.531 sys 0m0.471s 00:07:31.531 00:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.531 00:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.531 ************************************ 00:07:31.531 END TEST default_locks_via_rpc 00:07:31.531 ************************************ 00:07:31.532 00:44:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:31.532 00:44:18 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:31.532 00:44:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.532 00:44:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.532 ************************************ 00:07:31.532 START TEST non_locking_app_on_locked_coremask 00:07:31.532 ************************************ 00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3289911 00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3289911 /var/tmp/spdk.sock 00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3289911 ']' 00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
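default_locks and default_locks_via_rpc, which finish above, both hinge on the per-core lock a target takes when started with -m 0x1; the _via_rpc variant additionally drops and re-takes that lock at runtime. A condensed sketch of the RPC round trip, using the binaries and lock check visible in the trace (the real test interposes waitforlisten before the first RPC):

tgt=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
"$tgt" -m 0x1 &                                # takes the core 0 lock on startup
pid=$!
sleep 1                                        # stand-in for waitforlisten on /var/tmp/spdk.sock
"$rpc" framework_disable_cpumask_locks         # release the lock while running
! lslocks -p "$pid" | grep -q spdk_cpu_lock    # the locks_exist check should now fail
"$rpc" framework_enable_cpumask_locks          # re-acquire it
lslocks -p "$pid" | grep -q spdk_cpu_lock      # and succeed again
kill "$pid" && wait "$pid"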
00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.532 00:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:31.532 [2024-05-15 00:44:18.430228] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:31.532 [2024-05-15 00:44:18.430340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289911 ] 00:07:31.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.532 [2024-05-15 00:44:18.523516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.792 [2024-05-15 00:44:18.615884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3289963 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3289963 /var/tmp/spdk2.sock 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3289963 ']' 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:32.360 00:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.360 [2024-05-15 00:44:19.203151] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:32.360 [2024-05-15 00:44:19.203274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289963 ] 00:07:32.360 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.360 [2024-05-15 00:44:19.359871] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
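non_locking_app_on_locked_coremask, running above, shows that a second target on the same core mask still starts when it is told not to take core locks and is given its own RPC socket. The essence of that setup, with the binary path from the trace (the negative lslocks check is an illustrative addition; the test itself only asserts the lock on the first pid):

tgt=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt
"$tgt" -m 0x1 &                                                  # claims the core 0 lock
locked_pid=$!
"$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken
unlocked_pid=$!
sleep 1                                                          # stand-in for the two waitforlisten calls
lslocks -p "$locked_pid" | grep -q spdk_cpu_lock                 # only the first holds spdk_cpu_lock
! lslocks -p "$unlocked_pid" | grep -q spdk_cpu_lock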
00:07:32.360 [2024-05-15 00:44:19.359917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.619 [2024-05-15 00:44:19.550352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3289911 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3289911 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.557 lslocks: write error 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3289911 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3289911 ']' 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3289911 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3289911 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3289911' 00:07:33.557 killing process with pid 3289911 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3289911 00:07:33.557 00:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3289911 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3289963 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3289963 ']' 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3289963 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3289963 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3289963' 00:07:35.467 
killing process with pid 3289963 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3289963 00:07:35.467 00:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3289963 00:07:36.407 00:07:36.407 real 0m4.874s 00:07:36.407 user 0m4.933s 00:07:36.407 sys 0m0.982s 00:07:36.407 00:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.407 00:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.407 ************************************ 00:07:36.407 END TEST non_locking_app_on_locked_coremask 00:07:36.407 ************************************ 00:07:36.407 00:44:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:36.407 00:44:23 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:36.407 00:44:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.407 00:44:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.407 ************************************ 00:07:36.407 START TEST locking_app_on_unlocked_coremask 00:07:36.407 ************************************ 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3290840 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3290840 /var/tmp/spdk.sock 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3290840 ']' 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.407 00:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:36.407 [2024-05-15 00:44:23.392414] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:36.407 [2024-05-15 00:44:23.392551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290840 ] 00:07:36.667 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.667 [2024-05-15 00:44:23.521778] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
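killprocess, traced in full just above, is deliberately careful about what it kills: it verifies the pid is still alive, reads its command name to confirm it is an SPDK reactor rather than a sudo wrapper, announces the kill, then reaps the process so the next test starts clean. A reduced sketch of that pattern (the sudo branch is never taken in this log, so it is only noted in a comment):

killprocess_sketch() {
  local pid=$1
  kill -0 "$pid"                          # still running?
  ps --no-headers -o comm= "$pid"         # reactor_0 for every target in this run
  # the real helper special-cases a 'sudo' command name here; that branch never fires above
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                             # reap it so later pid checks are unambiguous
}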
00:07:36.667 [2024-05-15 00:44:23.521832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.667 [2024-05-15 00:44:23.614017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3291090 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3291090 /var/tmp/spdk2.sock 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3291090 ']' 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:37.236 00:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.236 [2024-05-15 00:44:24.198716] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:37.236 [2024-05-15 00:44:24.198853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291090 ] 00:07:37.236 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.494 [2024-05-15 00:44:24.369048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.495 [2024-05-15 00:44:24.554272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.431 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:38.431 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:38.431 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3291090 00:07:38.431 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3291090 00:07:38.431 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.691 lslocks: write error 00:07:38.691 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3290840 00:07:38.691 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3290840 ']' 00:07:38.691 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3290840 00:07:38.691 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:38.691 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:38.691 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3290840 00:07:38.692 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:38.692 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:38.692 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3290840' 00:07:38.692 killing process with pid 3290840 00:07:38.692 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3290840 00:07:38.692 00:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3290840 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3291090 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3291090 ']' 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3291090 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3291090 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
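locks_exist, seen again just above, reduces to one pipeline: list the file locks held by the pid and look for the spdk_cpu_lock entries the framework creates under /var/tmp. The recurring 'lslocks: write error' lines are almost certainly a side effect of grep -q closing the pipe as soon as it matches, not a test failure. The check in isolation:

locks_exist_sketch() {
  local pid=$1
  lslocks -p "$pid" | grep -q spdk_cpu_lock   # /var/tmp/spdk_cpu_lock_* holds the per-core locks
}
locks_exist_sketch 3291090                    # pid of the locking instance above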
00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3291090' 00:07:40.597 killing process with pid 3291090 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3291090 00:07:40.597 00:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3291090 00:07:41.167 00:07:41.167 real 0m4.937s 00:07:41.167 user 0m4.966s 00:07:41.167 sys 0m1.075s 00:07:41.167 00:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.427 ************************************ 00:07:41.427 END TEST locking_app_on_unlocked_coremask 00:07:41.427 ************************************ 00:07:41.427 00:44:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:41.427 00:44:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:41.427 00:44:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.427 00:44:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.427 ************************************ 00:07:41.427 START TEST locking_app_on_locked_coremask 00:07:41.427 ************************************ 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3291782 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3291782 /var/tmp/spdk.sock 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3291782 ']' 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.427 00:44:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:41.427 [2024-05-15 00:44:28.398776] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:41.427 [2024-05-15 00:44:28.398909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291782 ] 00:07:41.427 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.687 [2024-05-15 00:44:28.535907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.687 [2024-05-15 00:44:28.638424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3292071 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3292071 /var/tmp/spdk2.sock 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3292071 /var/tmp/spdk2.sock 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3292071 /var/tmp/spdk2.sock 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3292071 ']' 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:42.257 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.257 [2024-05-15 00:44:29.194173] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
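locking_app_on_locked_coremask starts a second target on core 0 without --disable-cpumask-locks while pid 3291782 still holds the core lock, so the test wraps waitforlisten in NOT and expects the launch traced above to die with 'Cannot create lock on core 0'. A loose, simplified stand-in for that NOT waitforlisten combination (the real helpers live in autotest_common.sh and do considerably more):

tgt=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt
"$tgt" -m 0x1 &                         # first instance, claims core 0
"$tgt" -m 0x1 -r /var/tmp/spdk2.sock &  # second instance on the already-claimed core
pid2=$!
if wait "$pid2"; then                   # expected to fail: the second target exits on the lock conflict
  echo "unexpected: second instance acquired core 0" >&2
  exit 1
fi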
00:07:42.257 [2024-05-15 00:44:29.194265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292071 ] 00:07:42.257 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.555 [2024-05-15 00:44:29.324262] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3291782 has claimed it. 00:07:42.555 [2024-05-15 00:44:29.324316] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:42.838 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3292071) - No such process 00:07:42.838 ERROR: process (pid: 3292071) is no longer running 00:07:42.838 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.838 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:42.838 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:42.838 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.838 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.838 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.838 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3291782 00:07:42.838 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3291782 00:07:42.838 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.096 lslocks: write error 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3291782 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3291782 ']' 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3291782 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3291782 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3291782' 00:07:43.096 killing process with pid 3291782 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3291782 00:07:43.096 00:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3291782 00:07:44.036 00:07:44.036 real 0m2.558s 00:07:44.036 user 0m2.624s 00:07:44.036 sys 0m0.666s 00:07:44.036 00:44:30 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.036 00:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.036 ************************************ 00:07:44.036 END TEST locking_app_on_locked_coremask 00:07:44.036 ************************************ 00:07:44.036 00:44:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:44.036 00:44:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:44.036 00:44:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.036 00:44:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.036 ************************************ 00:07:44.036 START TEST locking_overlapped_coremask 00:07:44.036 ************************************ 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3292403 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3292403 /var/tmp/spdk.sock 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3292403 ']' 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.036 00:44:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:44.036 [2024-05-15 00:44:31.032506] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:44.036 [2024-05-15 00:44:31.032640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292403 ] 00:07:44.295 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.295 [2024-05-15 00:44:31.163984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.295 [2024-05-15 00:44:31.259024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.295 [2024-05-15 00:44:31.259103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.295 [2024-05-15 00:44:31.259108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3292670 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3292670 /var/tmp/spdk2.sock 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3292670 /var/tmp/spdk2.sock 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3292670 /var/tmp/spdk2.sock 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3292670 ']' 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.863 00:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.863 [2024-05-15 00:44:31.838352] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:44.863 [2024-05-15 00:44:31.838495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292670 ] 00:07:44.863 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.122 [2024-05-15 00:44:32.005778] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3292403 has claimed it. 00:07:45.122 [2024-05-15 00:44:32.005836] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:45.382 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3292670) - No such process 00:07:45.382 ERROR: process (pid: 3292670) is no longer running 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3292403 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3292403 ']' 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3292403 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:45.382 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3292403 00:07:45.642 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:45.642 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:45.642 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3292403' 00:07:45.642 killing process with pid 3292403 00:07:45.642 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 3292403 
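locking_overlapped_coremask: masks 0x7 and 0x1c overlap on core 2, so the second target dies exactly as in the previous test, and check_remaining_locks then confirms the survivor still owns precisely the lock files for its three cores. The comparison it performs, reconstructed from the trace:

locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files exist right now
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, matching the surviving -m 0x7 target
[[ ${locks[*]} == "${locks_expected[*]}" ]]         # any missing or extra file fails the test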
00:07:45.642 00:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3292403 00:07:46.581 00:07:46.581 real 0m2.361s 00:07:46.581 user 0m6.056s 00:07:46.581 sys 0m0.624s 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.581 ************************************ 00:07:46.581 END TEST locking_overlapped_coremask 00:07:46.581 ************************************ 00:07:46.581 00:44:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:46.581 00:44:33 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.581 00:44:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.581 00:44:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.581 ************************************ 00:07:46.581 START TEST locking_overlapped_coremask_via_rpc 00:07:46.581 ************************************ 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3293021 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3293021 /var/tmp/spdk.sock 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3293021 ']' 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:46.581 00:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.581 [2024-05-15 00:44:33.446155] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:46.581 [2024-05-15 00:44:33.446270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293021 ] 00:07:46.581 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.581 [2024-05-15 00:44:33.563340] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:46.581 [2024-05-15 00:44:33.563374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.843 [2024-05-15 00:44:33.657505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.843 [2024-05-15 00:44:33.657504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.843 [2024-05-15 00:44:33.657511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3293043 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3293043 /var/tmp/spdk2.sock 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3293043 ']' 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.103 00:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:47.364 [2024-05-15 00:44:34.253877] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:47.364 [2024-05-15 00:44:34.254008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293043 ] 00:07:47.364 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.624 [2024-05-15 00:44:34.427656] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
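Both targets in this test are started with --disable-cpumask-locks, which is why the overlapping masks (0x7 and 0x1c both cover core 2) come up side by side and each prints *NOTICE*: CPU core locks deactivated instead of failing. A minimal sketch of that startup with the same flags as the log; nothing should be claimed under /var/tmp until the locks are enabled over RPC further down.

    # Overlapping coremasks are tolerated while per-core locks are off.
    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    sleep 3
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no per-core lock files yet"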
00:07:47.624 [2024-05-15 00:44:34.427702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.624 [2024-05-15 00:44:34.621669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.624 [2024-05-15 00:44:34.621796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.624 [2024-05-15 00:44:34.621829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.567 [2024-05-15 00:44:35.317169] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3293021 has claimed it. 
00:07:48.567 request: 00:07:48.567 { 00:07:48.567 "method": "framework_enable_cpumask_locks", 00:07:48.567 "req_id": 1 00:07:48.567 } 00:07:48.567 Got JSON-RPC error response 00:07:48.567 response: 00:07:48.567 { 00:07:48.567 "code": -32603, 00:07:48.567 "message": "Failed to claim CPU core: 2" 00:07:48.567 } 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3293021 /var/tmp/spdk.sock 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3293021 ']' 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3293043 /var/tmp/spdk2.sock 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3293043 ']' 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
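The JSON-RPC exchange above is what the harness drives through rpc_cmd: framework_enable_cpumask_locks succeeds on the first target, while the same call against the second target's socket is rejected with error -32603 because core 2 is already claimed. Roughly the same exchange by hand, using scripts/rpc.py (the wrapper rpc_cmd calls) and the socket paths from this run:

    # Succeeds on the first target (default socket /var/tmp/spdk.sock)
    # and creates /var/tmp/spdk_cpu_lock_000..002:
    ./scripts/rpc.py framework_enable_cpumask_locks

    # Fails on the second target, mirroring the response shown above:
    #   {"code": -32603, "message": "Failed to claim CPU core: 2"}
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks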
00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.567 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.828 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.828 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:48.828 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:48.828 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:48.828 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:48.828 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:48.828 00:07:48.828 real 0m2.293s 00:07:48.828 user 0m0.724s 00:07:48.828 sys 0m0.140s 00:07:48.828 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.828 00:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.828 ************************************ 00:07:48.828 END TEST locking_overlapped_coremask_via_rpc 00:07:48.828 ************************************ 00:07:48.828 00:44:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:48.828 00:44:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3293021 ]] 00:07:48.828 00:44:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3293021 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3293021 ']' 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3293021 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3293021 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3293021' 00:07:48.828 killing process with pid 3293021 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3293021 00:07:48.828 00:44:35 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3293021 00:07:49.769 00:44:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3293043 ]] 00:07:49.769 00:44:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3293043 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3293043 ']' 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3293043 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3293043 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3293043' 00:07:49.769 killing process with pid 3293043 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3293043 00:07:49.769 00:44:36 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3293043 00:07:50.711 00:44:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:50.711 00:44:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:50.711 00:44:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3293021 ]] 00:07:50.711 00:44:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3293021 00:07:50.711 00:44:37 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3293021 ']' 00:07:50.711 00:44:37 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3293021 00:07:50.711 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3293021) - No such process 00:07:50.711 00:44:37 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3293021 is not found' 00:07:50.711 Process with pid 3293021 is not found 00:07:50.711 00:44:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3293043 ]] 00:07:50.711 00:44:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3293043 00:07:50.711 00:44:37 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3293043 ']' 00:07:50.711 00:44:37 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3293043 00:07:50.711 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3293043) - No such process 00:07:50.711 00:44:37 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3293043 is not found' 00:07:50.711 Process with pid 3293043 is not found 00:07:50.711 00:44:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:50.711 00:07:50.711 real 0m23.058s 00:07:50.711 user 0m37.724s 00:07:50.711 sys 0m5.528s 00:07:50.711 00:44:37 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.711 00:44:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 ************************************ 00:07:50.711 END TEST cpu_locks 00:07:50.711 ************************************ 00:07:50.711 00:07:50.711 real 0m48.346s 00:07:50.711 user 1m26.524s 00:07:50.711 sys 0m8.886s 00:07:50.711 00:44:37 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.711 00:44:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 ************************************ 00:07:50.711 END TEST event 00:07:50.711 ************************************ 00:07:50.711 00:44:37 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:07:50.711 00:44:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:50.711 00:44:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.711 00:44:37 -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 ************************************ 00:07:50.711 START TEST thread 00:07:50.711 ************************************ 00:07:50.711 00:44:37 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:07:50.711 * Looking for test storage... 00:07:50.711 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:07:50.711 00:44:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:50.711 00:44:37 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:50.711 00:44:37 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.711 00:44:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 ************************************ 00:07:50.711 START TEST thread_poller_perf 00:07:50.711 ************************************ 00:07:50.711 00:44:37 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:50.711 [2024-05-15 00:44:37.760453] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:50.711 [2024-05-15 00:44:37.760590] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293884 ] 00:07:50.972 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.972 [2024-05-15 00:44:37.899460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.972 [2024-05-15 00:44:37.991131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.972 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:52.355 ====================================== 00:07:52.355 busy:1908243872 (cyc) 00:07:52.355 total_run_count: 402000 00:07:52.355 tsc_hz: 1900000000 (cyc) 00:07:52.355 ====================================== 00:07:52.355 poller_cost: 4746 (cyc), 2497 (nsec) 00:07:52.355 00:07:52.355 real 0m1.432s 00:07:52.355 user 0m1.253s 00:07:52.355 sys 0m0.172s 00:07:52.355 00:44:39 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.355 00:44:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:52.355 ************************************ 00:07:52.355 END TEST thread_poller_perf 00:07:52.355 ************************************ 00:07:52.355 00:44:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:52.355 00:44:39 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:52.355 00:44:39 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.355 00:44:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.355 ************************************ 00:07:52.355 START TEST thread_poller_perf 00:07:52.355 ************************************ 00:07:52.355 00:44:39 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:52.355 [2024-05-15 00:44:39.261711] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
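The poller_cost line above is straight integer arithmetic on the printed counters: cycles per poll is busy divided by total_run_count, and the nanosecond figure rescales that by the TSC rate. A small sketch that reproduces the first run's 4746 (cyc) / 2497 (nsec) with shell arithmetic, assuming the tool truncates at each step (the numbers match); the -l 0 run that follows obeys the same formula (358 cyc, 188 nsec).

    busy=1908243872     # busy cycles reported above
    runs=402000         # total_run_count
    tsc_hz=1900000000   # cycles per second
    cyc=$(( busy / runs ))                    # 4746 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # 2497 ns per poll
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"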
00:07:52.355 [2024-05-15 00:44:39.261846] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294220 ] 00:07:52.355 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.355 [2024-05-15 00:44:39.395453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.615 [2024-05-15 00:44:39.486120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.615 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:53.998 ====================================== 00:07:53.998 busy:1901971116 (cyc) 00:07:53.998 total_run_count: 5307000 00:07:53.998 tsc_hz: 1900000000 (cyc) 00:07:53.998 ====================================== 00:07:53.998 poller_cost: 358 (cyc), 188 (nsec) 00:07:53.998 00:07:53.998 real 0m1.421s 00:07:53.998 user 0m1.263s 00:07:53.998 sys 0m0.151s 00:07:53.998 00:44:40 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.998 00:44:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:53.998 ************************************ 00:07:53.998 END TEST thread_poller_perf 00:07:53.998 ************************************ 00:07:53.998 00:44:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:53.998 00:07:53.998 real 0m3.076s 00:07:53.998 user 0m2.589s 00:07:53.998 sys 0m0.484s 00:07:53.998 00:44:40 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.998 00:44:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.998 ************************************ 00:07:53.998 END TEST thread 00:07:53.998 ************************************ 00:07:53.998 00:44:40 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:07:53.998 00:44:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:53.998 00:44:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.998 00:44:40 -- common/autotest_common.sh@10 -- # set +x 00:07:53.998 ************************************ 00:07:53.998 START TEST accel 00:07:53.998 ************************************ 00:07:53.998 00:44:40 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:07:53.998 * Looking for test storage... 00:07:53.998 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:07:53.998 00:44:40 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:53.998 00:44:40 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:53.998 00:44:40 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:53.998 00:44:40 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3294665 00:07:53.998 00:44:40 accel -- accel/accel.sh@63 -- # waitforlisten 3294665 00:07:53.998 00:44:40 accel -- common/autotest_common.sh@827 -- # '[' -z 3294665 ']' 00:07:53.998 00:44:40 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.998 00:44:40 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:53.998 00:44:40 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.998 00:44:40 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:53.998 00:44:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.998 00:44:40 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:53.998 00:44:40 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:53.998 00:44:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.998 00:44:40 accel -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:53.998 00:44:40 accel -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:53.998 00:44:40 accel -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:53.998 00:44:40 accel -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:53.998 00:44:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.998 00:44:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.998 00:44:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:53.998 00:44:40 accel -- accel/accel.sh@41 -- # jq -r . 00:07:53.998 [2024-05-15 00:44:40.931933] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:53.998 [2024-05-15 00:44:40.932077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294665 ] 00:07:53.998 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.259 [2024-05-15 00:44:41.062584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.259 [2024-05-15 00:44:41.156841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.259 [2024-05-15 00:44:41.161507] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:54.259 [2024-05-15 00:44:41.169439] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@860 -- # return 0 00:08:02.401 00:44:48 accel -- accel/accel.sh@65 -- # [[ 1 -gt 0 ]] 00:08:02.401 00:44:48 accel -- accel/accel.sh@65 -- # check_save_config dsa_scan_accel_module 00:08:02.401 00:44:48 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:08:02.401 00:44:48 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.401 00:44:48 accel -- accel/accel.sh@56 -- # grep dsa_scan_accel_module 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.401 "method": "dsa_scan_accel_module", 00:08:02.401 00:44:48 accel -- accel/accel.sh@66 -- # [[ 1 -gt 0 ]] 00:08:02.401 00:44:48 accel -- accel/accel.sh@66 -- # check_save_config iaa_scan_accel_module 00:08:02.401 00:44:48 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:08:02.401 00:44:48 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.401 00:44:48 accel -- accel/accel.sh@56 -- # grep iaa_scan_accel_module 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.401 "method": "iaa_scan_accel_module" 00:08:02.401 
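The accel target above is launched with a JSON config fed through /dev/fd/63 that enables the DSA and IAA modules, and check_save_config then greps rpc.py save_config output for the two method entries (the jq filter above confirms the subsystems/accel/config layout). Roughly the equivalent done by hand; the accel.json file name is illustrative.

    cat > accel.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "accel",
          "config": [
            { "method": "dsa_scan_accel_module" },
            { "method": "iaa_scan_accel_module" }
          ]
        }
      ]
    }
    EOF
    ./build/bin/spdk_tgt -c accel.json &
    sleep 3
    # Confirm the modules were applied and list which module owns each opcode
    # (the expected_opcs loop below parses exactly this output):
    ./scripts/rpc.py save_config | grep -e dsa_scan_accel_module -e iaa_scan_accel_module
    ./scripts/rpc.py accel_get_opc_assignments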
00:44:48 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:02.401 00:44:48 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:02.401 00:44:48 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:02.401 00:44:48 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:02.401 00:44:48 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.401 00:44:48 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.401 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.401 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:08:02.401 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.401 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:08:02.401 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.401 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:08:02.401 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.401 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:08:02.401 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.401 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:08:02.402 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:08:02.402 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:08:02.402 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:08:02.402 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.402 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 
00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.402 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.402 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:08:02.402 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.402 00:44:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.402 00:44:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.402 00:44:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:08:02.402 00:44:48 accel -- accel/accel.sh@75 -- # killprocess 3294665 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@946 -- # '[' -z 3294665 ']' 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@950 -- # kill -0 3294665 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@951 -- # uname 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3294665 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3294665' 00:08:02.402 killing process with pid 3294665 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@965 -- # kill 3294665 00:08:02.402 00:44:48 accel -- common/autotest_common.sh@970 -- # wait 3294665 00:08:04.946 00:44:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:04.946 00:44:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:04.946 00:44:51 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:04.946 00:44:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.946 00:44:51 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.946 00:44:51 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": 
"iaa_scan_accel_module"}') 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:04.946 00:44:51 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:08:04.946 00:44:51 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.946 00:44:51 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:04.946 00:44:51 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:04.946 00:44:51 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:04.946 00:44:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.946 00:44:51 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.946 ************************************ 00:08:04.946 START TEST accel_missing_filename 00:08:04.946 ************************************ 00:08:04.946 00:44:51 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:08:04.946 00:44:51 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:04.946 00:44:51 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:04.946 00:44:51 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:04.946 00:44:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.946 00:44:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:04.946 00:44:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.946 00:44:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:04.946 00:44:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:04.946 00:44:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:04.946 00:44:51 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.946 00:44:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:04.946 00:44:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:04.946 00:44:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:04.946 00:44:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:04.946 00:44:51 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.947 00:44:51 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.947 00:44:51 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:04.947 00:44:51 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:04.947 [2024-05-15 00:44:51.697069] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:08:04.947 [2024-05-15 00:44:51.697183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296806 ] 00:08:04.947 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.947 [2024-05-15 00:44:51.814402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.947 [2024-05-15 00:44:51.915391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.947 [2024-05-15 00:44:51.919887] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:04.947 [2024-05-15 00:44:51.927845] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:11.596 [2024-05-15 00:44:58.317751] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.502 [2024-05-15 00:45:00.187934] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:08:13.502 A filename is required. 00:08:13.502 00:45:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:13.502 00:45:00 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.502 00:45:00 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:13.502 00:45:00 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:13.502 00:45:00 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:13.502 00:45:00 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.502 00:08:13.502 real 0m8.691s 00:08:13.502 user 0m2.326s 00:08:13.502 sys 0m0.221s 00:08:13.502 00:45:00 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.502 00:45:00 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:13.502 ************************************ 00:08:13.502 END TEST accel_missing_filename 00:08:13.502 ************************************ 00:08:13.502 00:45:00 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:13.502 00:45:00 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:13.502 00:45:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.502 00:45:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.502 ************************************ 00:08:13.502 START TEST accel_compress_verify 00:08:13.502 ************************************ 00:08:13.502 00:45:00 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:13.502 00:45:00 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:13.502 00:45:00 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:13.502 00:45:00 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:13.502 00:45:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.502 00:45:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:13.502 00:45:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:08:13.502 00:45:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:13.502 00:45:00 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:13.502 [2024-05-15 00:45:00.447561] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:08:13.502 [2024-05-15 00:45:00.447672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3298367 ] 00:08:13.502 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.761 [2024-05-15 00:45:00.570318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.761 [2024-05-15 00:45:00.683059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.761 [2024-05-15 00:45:00.687645] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:13.761 [2024-05-15 00:45:00.695600] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:20.340 [2024-05-15 00:45:07.084593] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.251 [2024-05-15 00:45:08.930571] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:08:22.251 00:08:22.251 Compression does not support the verify option, aborting. 
00:08:22.251 00:45:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:22.251 00:45:09 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:22.251 00:45:09 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:22.251 00:45:09 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:22.251 00:45:09 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:22.251 00:45:09 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:22.251 00:08:22.251 real 0m8.682s 00:08:22.251 user 0m2.301s 00:08:22.251 sys 0m0.250s 00:08:22.251 00:45:09 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.251 00:45:09 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:22.251 ************************************ 00:08:22.251 END TEST accel_compress_verify 00:08:22.251 ************************************ 00:08:22.251 00:45:09 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:22.251 00:45:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:22.251 00:45:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.251 00:45:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.251 ************************************ 00:08:22.251 START TEST accel_wrong_workload 00:08:22.251 ************************************ 00:08:22.251 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:08:22.251 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:22.251 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:22.251 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:22.251 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.251 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:22.251 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.252 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.252 00:45:09 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:22.252 00:45:09 accel.accel_wrong_workload -- 
accel/accel.sh@41 -- # jq -r . 00:08:22.252 Unsupported workload type: foobar 00:08:22.252 [2024-05-15 00:45:09.186687] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:22.252 accel_perf options: 00:08:22.252 [-h help message] 00:08:22.252 [-q queue depth per core] 00:08:22.252 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:22.252 [-T number of threads per core 00:08:22.252 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:22.252 [-t time in seconds] 00:08:22.252 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:22.252 [ dif_verify, , dif_generate, dif_generate_copy 00:08:22.252 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:22.252 [-l for compress/decompress workloads, name of uncompressed input file 00:08:22.252 [-S for crc32c workload, use this seed value (default 0) 00:08:22.252 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:22.252 [-f for fill workload, use this BYTE value (default 255) 00:08:22.252 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:22.252 [-y verify result if this switch is on] 00:08:22.252 [-a tasks to allocate per core (default: same value as -q)] 00:08:22.252 Can be used to spread operations across a wider range of memory. 00:08:22.252 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:22.252 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:22.252 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:22.252 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:22.252 00:08:22.252 real 0m0.054s 00:08:22.252 user 0m0.054s 00:08:22.252 sys 0m0.032s 00:08:22.252 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.252 00:45:09 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:22.252 ************************************ 00:08:22.252 END TEST accel_wrong_workload 00:08:22.252 ************************************ 00:08:22.252 00:45:09 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:22.252 00:45:09 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:22.252 00:45:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.252 00:45:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.252 ************************************ 00:08:22.252 START TEST accel_negative_buffers 00:08:22.252 ************************************ 00:08:22.252 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:22.252 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:22.252 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:22.252 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:22.252 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.252 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t 
accel_perf 00:08:22.252 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.252 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:22.252 00:45:09 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:22.252 -x option must be non-negative. 00:08:22.252 [2024-05-15 00:45:09.298505] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:22.513 accel_perf options: 00:08:22.513 [-h help message] 00:08:22.513 [-q queue depth per core] 00:08:22.513 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:22.513 [-T number of threads per core 00:08:22.513 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:22.513 [-t time in seconds] 00:08:22.513 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:22.513 [ dif_verify, , dif_generate, dif_generate_copy 00:08:22.513 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:22.513 [-l for compress/decompress workloads, name of uncompressed input file 00:08:22.513 [-S for crc32c workload, use this seed value (default 0) 00:08:22.513 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:22.513 [-f for fill workload, use this BYTE value (default 255) 00:08:22.513 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:22.513 [-y verify result if this switch is on] 00:08:22.513 [-a tasks to allocate per core (default: same value as -q)] 00:08:22.513 Can be used to spread operations across a wider range of memory. 
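Besides explaining the two failures just above (an unknown -w value and a negative -x), the option listing doubles as usage documentation for the tests that follow. Two hedged examples of invocations the flags describe, with arbitrary sizes and durations; the first matches the accel_crc32c test that starts next:

    # crc32c over 4 KiB buffers with a seed of 32 and result verification:
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y -o 4096

    # xor needs at least two source buffers, so -x must be >= 2, never -1:
    ./build/examples/accel_perf -t 1 -w xor -y -x 2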
00:08:22.513 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:22.513 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:22.513 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:22.513 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:22.513 00:08:22.513 real 0m0.051s 00:08:22.513 user 0m0.054s 00:08:22.513 sys 0m0.028s 00:08:22.513 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.513 00:45:09 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:22.513 ************************************ 00:08:22.513 END TEST accel_negative_buffers 00:08:22.513 ************************************ 00:08:22.513 00:45:09 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:22.513 00:45:09 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:22.513 00:45:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.513 00:45:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.513 ************************************ 00:08:22.513 START TEST accel_crc32c 00:08:22.513 ************************************ 00:08:22.513 00:45:09 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:22.513 00:45:09 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:22.513 [2024-05-15 00:45:09.411800] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:08:22.513 [2024-05-15 00:45:09.411906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300714 ] 00:08:22.513 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.513 [2024-05-15 00:45:09.529797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.773 [2024-05-15 00:45:09.632790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.773 [2024-05-15 00:45:09.637297] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:22.773 [2024-05-15 00:45:09.645256] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 
accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=dsa 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=dsa 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:29.349 00:45:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:32.646 00:45:19 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:32.646 00:45:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:32.646 00:08:32.646 real 0m9.676s 00:08:32.646 user 0m3.284s 00:08:32.646 sys 0m0.231s 00:08:32.646 00:45:19 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.646 00:45:19 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:32.646 ************************************ 00:08:32.646 END TEST accel_crc32c 00:08:32.646 ************************************ 00:08:32.646 00:45:19 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:32.646 00:45:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:32.646 00:45:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.646 00:45:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.646 ************************************ 00:08:32.646 START TEST accel_crc32c_C2 00:08:32.646 ************************************ 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:32.646 00:45:19 
accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:32.646 00:45:19 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:32.646 [2024-05-15 00:45:19.161974] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:08:32.646 [2024-05-15 00:45:19.162108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302642 ] 00:08:32.646 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.646 [2024-05-15 00:45:19.296165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.646 [2024-05-15 00:45:19.395539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.646 [2024-05-15 00:45:19.400116] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:32.646 [2024-05-15 00:45:19.408060] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.228 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=dsa 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.229 00:45:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:41.765 00:45:28 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:41.765 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:42.025 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:42.025 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:42.025 00:45:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:42.025 00:08:42.025 real 0m9.713s 00:08:42.025 user 0m3.290s 00:08:42.025 sys 0m0.262s 00:08:42.025 00:45:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.025 00:45:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:42.025 ************************************ 00:08:42.025 END TEST accel_crc32c_C2 00:08:42.025 ************************************ 00:08:42.025 00:45:28 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:42.025 00:45:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:42.025 00:45:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.025 00:45:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:42.025 ************************************ 00:08:42.025 START TEST accel_copy 00:08:42.025 ************************************ 00:08:42.025 00:45:28 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w copy -y 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:42.025 00:45:28 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:42.025 [2024-05-15 00:45:28.924445] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:08:42.025 [2024-05-15 00:45:28.924552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3304590 ] 00:08:42.025 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.025 [2024-05-15 00:45:29.041120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.285 [2024-05-15 00:45:29.142110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.285 [2024-05-15 00:45:29.146612] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:42.286 [2024-05-15 00:45:29.154577] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@23 -- # 
accel_opc=copy 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val=dsa 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=dsa 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:48.917 00:45:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:52.212 00:45:38 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:52.212 00:45:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:52.213 00:45:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:52.213 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:52.213 00:45:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:52.213 00:45:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:52.213 00:45:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:52.213 00:45:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:52.213 00:08:52.213 real 0m9.672s 00:08:52.213 user 0m3.272s 00:08:52.213 sys 0m0.239s 00:08:52.213 00:45:38 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:52.213 00:45:38 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:52.213 ************************************ 00:08:52.213 END TEST accel_copy 00:08:52.213 ************************************ 00:08:52.213 00:45:38 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:52.213 00:45:38 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:52.213 00:45:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:52.213 00:45:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:52.213 ************************************ 00:08:52.213 START TEST accel_fill 00:08:52.213 ************************************ 00:08:52.213 00:45:38 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:52.213 00:45:38 
accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:52.213 00:45:38 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:52.213 [2024-05-15 00:45:38.646886] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:08:52.213 [2024-05-15 00:45:38.646994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3306384 ] 00:08:52.213 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.213 [2024-05-15 00:45:38.762411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.213 [2024-05-15 00:45:38.862530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.213 [2024-05-15 00:45:38.867057] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:52.213 [2024-05-15 00:45:38.875009] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val=dsa 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=dsa 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:58.794 00:45:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 
00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:09:01.336 00:45:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:01.336 00:09:01.336 real 0m9.670s 00:09:01.336 user 0m3.264s 00:09:01.336 sys 0m0.242s 00:09:01.336 00:45:48 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:01.336 00:45:48 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:09:01.336 ************************************ 00:09:01.336 END TEST accel_fill 00:09:01.336 ************************************ 00:09:01.336 00:45:48 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:01.336 00:45:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:01.336 00:45:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:01.336 00:45:48 accel -- common/autotest_common.sh@10 -- # set +x 00:09:01.336 ************************************ 00:09:01.336 START TEST accel_copy_crc32c 00:09:01.336 ************************************ 00:09:01.336 00:45:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:09:01.336 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:01.336 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:01.336 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:01.336 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:01.336 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:01.336 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:01.336 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:01.337 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:01.337 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:01.337 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:01.337 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:01.337 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:01.337 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.337 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:01.337 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:01.337 00:45:48 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:01.337 [2024-05-15 00:45:48.367082] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:09:01.337 [2024-05-15 00:45:48.367196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308205 ] 00:09:01.597 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.597 [2024-05-15 00:45:48.483195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.597 [2024-05-15 00:45:48.595284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.597 [2024-05-15 00:45:48.599876] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:01.597 [2024-05-15 00:45:48.607831] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=dsa 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.179 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=dsa 00:09:08.180 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.180 00:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.180 00:45:55 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:08.180 00:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:11.478 00:09:11.478 real 0m9.690s 00:09:11.478 user 0m3.289s 00:09:11.478 sys 0m0.233s 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:11.478 00:45:58 accel.accel_copy_crc32c -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.478 ************************************ 00:09:11.478 END TEST accel_copy_crc32c 00:09:11.478 ************************************ 00:09:11.478 00:45:58 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:11.478 00:45:58 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:11.478 00:45:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:11.478 00:45:58 accel -- common/autotest_common.sh@10 -- # set +x 00:09:11.478 ************************************ 00:09:11.478 START TEST accel_copy_crc32c_C2 00:09:11.478 ************************************ 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:11.478 00:45:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:11.478 [2024-05-15 00:45:58.114174] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
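The only difference between accel_copy_crc32c and this _C2 variant is the "-C 2" flag: per the option help printed earlier in this log, -C sets the io-vector size to test (default 1), so the same copy_crc32c workload is now driven with two-element io vectors. Illustration only, assuming the binary path from the trace and omitting the generated "-c" config for brevity:

ACCEL_PERF=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w copy_crc32c -y          # default io-vector size (-C 1)
"$ACCEL_PERF" -t 1 -w copy_crc32c -y -C 2     # two-element io vectors, as in accel_copy_crc32c_C2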
00:09:11.478 [2024-05-15 00:45:58.114284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310245 ] 00:09:11.478 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.478 [2024-05-15 00:45:58.228428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.478 [2024-05-15 00:45:58.327714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.478 [2024-05-15 00:45:58.332222] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:11.478 [2024-05-15 00:45:58.340183] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.057 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=dsa 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=dsa 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:09:18.058 00:46:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:21.417 00:09:21.417 real 0m9.670s 00:09:21.417 user 0m3.268s 00:09:21.417 sys 0m0.239s 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:21.417 00:46:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:21.417 ************************************ 00:09:21.417 END TEST accel_copy_crc32c_C2 00:09:21.417 ************************************ 00:09:21.417 00:46:07 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:21.417 00:46:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:21.417 00:46:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.417 00:46:07 accel -- common/autotest_common.sh@10 -- # set +x 00:09:21.417 ************************************ 00:09:21.417 START TEST accel_dualcast 00:09:21.417 ************************************ 00:09:21.417 00:46:07 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:09:21.417 
00:46:07 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:09:21.417 00:46:07 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:09:21.417 [2024-05-15 00:46:07.852179] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:09:21.417 [2024-05-15 00:46:07.852318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312063 ] 00:09:21.417 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.417 [2024-05-15 00:46:07.983178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.417 [2024-05-15 00:46:08.083439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.417 [2024-05-15 00:46:08.088001] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:21.417 [2024-05-15 00:46:08.095952] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dsa 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=dsa 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:27.999 00:46:14 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:27.999 00:46:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:30.538 00:46:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:09:30.539 00:46:17 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:30.539 00:09:30.539 real 0m9.709s 00:09:30.539 user 0m3.293s 00:09:30.539 sys 0m0.250s 00:09:30.539 00:46:17 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:30.539 00:46:17 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:09:30.539 ************************************ 00:09:30.539 END TEST accel_dualcast 00:09:30.539 ************************************ 00:09:30.539 00:46:17 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:30.539 00:46:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:30.539 00:46:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:30.539 00:46:17 accel -- common/autotest_common.sh@10 -- # set +x 00:09:30.539 ************************************ 00:09:30.539 START TEST accel_compare 
00:09:30.539 ************************************ 00:09:30.539 00:46:17 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:09:30.539 00:46:17 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:09:30.799 [2024-05-15 00:46:17.621841] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
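For orientation while reading these traces: every accel_* case in this log drives the same accel_perf example binary, handing it a small JSON config on /dev/fd/62 whose entries are the two *_scan_accel_module methods that build_accel_config accumulates in the trace. A minimal standalone sketch of the accel_compare run follows; only the method names and command-line flags are taken from this log, the surrounding "subsystems" envelope is an assumption.

    # Hypothetical reproduction of the traced accel_compare run; the JSON wrapper
    # below is assumed, only the "method" entries and the flags appear in the log.
    cfg='{"subsystems":[{"subsystem":"accel","config":[
           {"method": "dsa_scan_accel_module"},
           {"method": "iaa_scan_accel_module"}]}]}'
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -c <(printf '%s' "$cfg") -t 1 -w compare -y

With bash process substitution the config would surface as a /dev/fd/NN path, consistent with the /dev/fd/62 recorded in the trace.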
00:09:30.799 [2024-05-15 00:46:17.621973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314013 ] 00:09:30.799 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.799 [2024-05-15 00:46:17.756397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.799 [2024-05-15 00:46:17.859223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.059 [2024-05-15 00:46:17.863769] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:31.059 [2024-05-15 00:46:17.871717] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:09:37.639 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val=dsa 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- 
# case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=dsa 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:37.640 00:46:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:09:40.939 00:46:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:40.939 00:09:40.939 real 0m9.719s 00:09:40.939 user 0m3.284s 00:09:40.939 sys 0m0.277s 00:09:40.939 00:46:27 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:40.939 00:46:27 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:09:40.939 ************************************ 00:09:40.939 END TEST accel_compare 00:09:40.939 ************************************ 00:09:40.939 00:46:27 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:40.939 00:46:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:40.939 00:46:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:40.939 00:46:27 accel -- common/autotest_common.sh@10 -- # set +x 00:09:40.939 ************************************ 00:09:40.939 START TEST accel_xor 00:09:40.939 ************************************ 00:09:40.939 00:46:27 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:40.939 00:46:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
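The accel/accel.sh@27 lines that close each test (here for accel_compare, and identically for the other cases) are the actual pass criteria: an opcode and a module must have been reported, and the module must match the expected backend, dsa for the offloaded opcodes and software for the xor runs below. A rough shell equivalent, with $expected_module as a hypothetical placeholder for the value the script compares against:

    # Rough equivalent of the accel.sh@27 assertions; $accel_module and $accel_opc
    # hold the values parsed out of accel_perf's output in the trace above,
    # $expected_module is a hypothetical placeholder.
    [[ -n $accel_module ]] || exit 1
    [[ -n $accel_opc ]]    || exit 1
    [[ $accel_module == "$expected_module" ]] || exit 1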
00:09:40.939 [2024-05-15 00:46:27.386051] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:09:40.939 [2024-05-15 00:46:27.386159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315929 ] 00:09:40.939 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.939 [2024-05-15 00:46:27.500655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.939 [2024-05-15 00:46:27.600033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.939 [2024-05-15 00:46:27.604540] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:40.939 [2024-05-15 00:46:27.612503] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 
00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 00:46:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:50.063 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:50.063 00:46:37 
accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:50.064 00:09:50.064 real 0m9.668s 00:09:50.064 user 0m0.012s 00:09:50.064 sys 0m0.000s 00:09:50.064 00:46:37 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:50.064 00:46:37 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:50.064 ************************************ 00:09:50.064 END TEST accel_xor 00:09:50.064 ************************************ 00:09:50.064 00:46:37 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:50.064 00:46:37 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:50.064 00:46:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:50.064 00:46:37 accel -- common/autotest_common.sh@10 -- # set +x 00:09:50.064 ************************************ 00:09:50.064 START TEST accel_xor 00:09:50.064 ************************************ 00:09:50.064 00:46:37 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:50.064 00:46:37 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
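This second accel_xor case differs from the previous one only by the -x 3 passed through run_test, which asks accel_perf for three xor source buffers instead of the default two (the trace records val=3 here versus val=2 above). Flags copied from the command lines in the log, binary path shortened for readability:

    # first xor run: default two source buffers
    accel_perf -c /dev/fd/62 -t 1 -w xor -y
    # this run: three source buffers
    accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3

In both cases the trace reports accel_module=software, i.e. the xor opcode ran on the software path rather than being offloaded to DSA.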
00:09:50.064 [2024-05-15 00:46:37.107629] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:09:50.064 [2024-05-15 00:46:37.107738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317727 ] 00:09:50.324 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.324 [2024-05-15 00:46:37.225304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.324 [2024-05-15 00:46:37.327609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.324 [2024-05-15 00:46:37.332094] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:50.324 [2024-05-15 00:46:37.340057] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 
00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.963 00:46:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:00.259 00:46:46 
accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:00.259 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:00.260 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:00.260 00:46:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:00.260 00:46:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:00.260 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:00.260 00:46:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:00.260 00:46:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:00.260 00:46:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:00.260 00:46:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:00.260 00:10:00.260 real 0m9.674s 00:10:00.260 user 0m3.288s 00:10:00.260 sys 0m0.225s 00:10:00.260 00:46:46 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:00.260 00:46:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:00.260 ************************************ 00:10:00.260 END TEST accel_xor 00:10:00.260 ************************************ 00:10:00.260 00:46:46 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:00.260 00:46:46 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:00.260 00:46:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:00.260 00:46:46 accel -- common/autotest_common.sh@10 -- # set +x 00:10:00.260 ************************************ 00:10:00.260 START TEST accel_dif_verify 00:10:00.260 ************************************ 00:10:00.260 00:46:46 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:00.260 00:46:46 
accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:00.260 00:46:46 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:10:00.260 [2024-05-15 00:46:46.829712] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:10:00.260 [2024-05-15 00:46:46.829785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319798 ] 00:10:00.260 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.260 [2024-05-15 00:46:46.917401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.260 [2024-05-15 00:46:47.015755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.260 [2024-05-15 00:46:47.020256] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:00.260 [2024-05-15 00:46:47.028217] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:10:06.825 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:06.826 
00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dsa 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=dsa 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 
00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.826 00:46:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:09.359 00:46:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:09.618 00:46:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:10:09.618 00:46:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:09.618 00:46:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:10:09.618 00:10:09.618 real 0m9.625s 00:10:09.618 user 0m3.266s 00:10:09.618 sys 0m0.194s 00:10:09.618 00:46:56 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:09.618 00:46:56 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:09.618 ************************************ 00:10:09.618 END TEST accel_dif_verify 00:10:09.618 ************************************ 00:10:09.618 00:46:56 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:09.618 00:46:56 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:09.618 00:46:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:09.618 00:46:56 accel -- common/autotest_common.sh@10 -- # set +x 00:10:09.618 ************************************ 00:10:09.618 START TEST accel_dif_generate 00:10:09.618 ************************************ 00:10:09.618 
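[Hedged sketch, not part of the captured output.] The accel_dif_verify run above drives build/examples/accel_perf with a JSON accel config fed on /dev/fd/62 and a 1-second dif_verify workload, and finishes on the dsa module. A manual re-run along the same lines could look like the following; the binary path and the -t/-w flags are copied from the trace, while the JSON wrapper written to /tmp/accel.json is an assumption (only the two scan_accel_module method names appear in the log).

# Hedged reproduction sketch: paths and flags from the trace, JSON wrapper assumed.
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
cat > /tmp/accel.json <<'EOF'
{"subsystems":[{"subsystem":"accel","config":[
  {"method": "dsa_scan_accel_module"},
  {"method": "iaa_scan_accel_module"}
]}]}
EOF
# 1-second dif_verify workload, as in the traced command line.
"$SPDK/build/examples/accel_perf" -c /tmp/accel.json -t 1 -w dif_verify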
00:46:56 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:09.619 00:46:56 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:09.619 [2024-05-15 00:46:56.515919] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
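[Hedged sketch, not part of the captured output.] The build_accel_config portion of the trace above shows how the suite assembles that JSON config: per-module method snippets are appended to an accel_json_cfg array and joined with IFS=, before being run through jq. A minimal standalone rendering of the same pattern follows; the array entries, the IFS=, join and the jq -r . call are taken from the trace, while the enclosing JSON document and the helper name build_accel_json are assumptions (the script's own function is traced as build_accel_config).

# Hedged sketch of the config-assembly pattern seen in the trace.
accel_json_cfg=()
accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
build_accel_json() {
    # Join the array entries with commas, as the traced "local IFS=," does,
    # then pretty-print the assumed wrapper document through jq.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}\n' "${accel_json_cfg[*]}" | jq -r .
}
build_accel_json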
00:10:09.619 [2024-05-15 00:46:56.516025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321602 ] 00:10:09.619 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.619 [2024-05-15 00:46:56.628848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.879 [2024-05-15 00:46:56.726800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.879 [2024-05-15 00:46:56.731269] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:09.879 [2024-05-15 00:46:56.739238] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.455 00:47:03 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.455 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
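[Hedged aside, not part of the captured output.] Unlike the dif_verify run above, which settles on accel_module=dsa, this dif_generate run selects accel_module=software, i.e. the opcode falls back to the software path rather than the DSA offload. One quick way to see which module each test landed on is to grep the saved console output; build.log below is a placeholder for wherever this log is stored.

# Hedged helper: build.log stands in for the saved console log.
grep -oE 'accel_module=(dsa|iaa|software)' build.log | sort | uniq -c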
00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:16.456 00:47:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:19.752 00:47:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:19.752 00:10:19.752 real 0m9.653s 00:10:19.752 user 0m3.255s 00:10:19.752 sys 0m0.241s 00:10:19.752 00:47:06 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.752 00:47:06 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:10:19.752 ************************************ 00:10:19.752 END TEST accel_dif_generate 00:10:19.752 ************************************ 00:10:19.752 00:47:06 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:19.752 00:47:06 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:19.752 00:47:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.752 00:47:06 accel -- common/autotest_common.sh@10 -- # set +x 00:10:19.752 ************************************ 00:10:19.752 START TEST accel_dif_generate_copy 00:10:19.752 
************************************ 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:19.752 00:47:06 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:10:19.752 [2024-05-15 00:47:06.228125] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
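[Hedged sketch, not part of the captured output.] The real/user/sys triplets reported after each test above look like bash time output wrapped around the accel_perf invocation. A minimal equivalent for the dif_generate_copy case that starts here, reusing the /tmp/accel.json file from the earlier sketch, would be:

# Hedged sketch: /tmp/accel.json comes from the reproduction sketch further up.
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
time "$SPDK/build/examples/accel_perf" -c /tmp/accel.json -t 1 -w dif_generate_copy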
00:10:19.752 [2024-05-15 00:47:06.228236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323392 ] 00:10:19.752 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.752 [2024-05-15 00:47:06.344586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.752 [2024-05-15 00:47:06.444982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.752 [2024-05-15 00:47:06.449557] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:19.752 [2024-05-15 00:47:06.457466] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.333 00:47:12 accel.accel_dif_generate_copy 
-- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dsa 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=dsa 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:26.334 00:47:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.913 00:47:15 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:10:28.913 00:10:28.913 real 0m9.709s 00:10:28.913 user 0m3.316s 00:10:28.913 sys 0m0.227s 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:28.913 00:47:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:10:28.913 ************************************ 00:10:28.913 END TEST accel_dif_generate_copy 00:10:28.913 ************************************ 00:10:28.913 00:47:15 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:28.913 00:47:15 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:28.913 00:47:15 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:10:28.913 00:47:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:28.913 00:47:15 accel -- common/autotest_common.sh@10 -- # set +x 00:10:28.913 ************************************ 00:10:28.913 START TEST accel_comp 00:10:28.913 ************************************ 00:10:28.913 00:47:15 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:28.913 00:47:15 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:10:29.197 [2024-05-15 00:47:15.994517] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:10:29.197 [2024-05-15 00:47:15.994623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325465 ] 00:10:29.197 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.197 [2024-05-15 00:47:16.110144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.197 [2024-05-15 00:47:16.212491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.197 [2024-05-15 00:47:16.217000] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:29.197 [2024-05-15 00:47:16.224962] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:10:35.760 00:47:22 
accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val=iaa 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=iaa 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.760 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 
seconds' 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:35.761 00:47:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:39.054 00:47:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:39.054 00:10:39.054 real 0m9.677s 00:10:39.054 user 0m3.291s 00:10:39.054 sys 0m0.221s 00:10:39.054 00:47:25 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:39.054 00:47:25 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:39.054 ************************************ 00:10:39.054 END TEST 
accel_comp 00:10:39.054 ************************************ 00:10:39.054 00:47:25 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:39.054 00:47:25 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:39.054 00:47:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:39.054 00:47:25 accel -- common/autotest_common.sh@10 -- # set +x 00:10:39.054 ************************************ 00:10:39.054 START TEST accel_decomp 00:10:39.054 ************************************ 00:10:39.054 00:47:25 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:39.054 00:47:25 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:39.054 [2024-05-15 00:47:25.728977] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
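[Hedged sketch, not part of the captured output.] The accel_comp run above compresses test/accel/bib (-w compress -l <file>), and the accel_decomp run starting here decompresses the same input with verification (-w decompress -l <file> -y); both report the iaa module in their summaries. The two invocations, with flags copied from the traced accel_perf command lines and the config file reused from the earlier sketch:

# Hedged sketch: flags taken from the traced accel_perf command lines.
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
BIB="$SPDK/test/accel/bib"
"$SPDK/build/examples/accel_perf" -c /tmp/accel.json -t 1 -w compress   -l "$BIB"
"$SPDK/build/examples/accel_perf" -c /tmp/accel.json -t 1 -w decompress -l "$BIB" -y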
00:10:39.054 [2024-05-15 00:47:25.729089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327261 ] 00:10:39.054 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.054 [2024-05-15 00:47:25.845375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.054 [2024-05-15 00:47:25.943700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.054 [2024-05-15 00:47:25.948248] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:39.054 [2024-05-15 00:47:25.956181] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 
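[Hedged aside, not part of the captured output.] The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice can be cross-checked on the host through the standard sysfs and procfs paths; the node number below simply mirrors the message.

# Hedged check of 2 MiB hugepage availability on NUMA node 1.
cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages
grep -i huge /proc/meminfo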
00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=iaa 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=iaa 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.617 00:47:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@21 -- # case 
"$var" in 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:48.910 00:47:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:48.910 00:10:48.910 real 0m9.675s 00:10:48.910 user 0m3.272s 00:10:48.910 sys 0m0.240s 00:10:48.910 00:47:35 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:48.910 00:47:35 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:48.910 ************************************ 00:10:48.910 END TEST accel_decomp 00:10:48.910 ************************************ 00:10:48.910 00:47:35 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:48.910 00:47:35 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:10:48.910 00:47:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:48.910 00:47:35 accel -- common/autotest_common.sh@10 -- # set +x 00:10:48.910 ************************************ 00:10:48.910 START TEST accel_decmop_full 00:10:48.910 ************************************ 00:10:48.910 00:47:35 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:10:48.910 00:47:35 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:10:48.910 [2024-05-15 00:47:35.462643] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:10:48.910 [2024-05-15 00:47:35.462750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329127 ] 00:10:48.910 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.910 [2024-05-15 00:47:35.580503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.910 [2024-05-15 00:47:35.681885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.910 [2024-05-15 00:47:35.686415] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:48.910 [2024-05-15 00:47:35.694376] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=iaa 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=iaa 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 
accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:55.489 00:47:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:58.770 00:47:45 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:58.770 00:10:58.770 real 0m9.683s 00:10:58.770 user 0m3.281s 00:10:58.770 sys 0m0.232s 00:10:58.770 00:47:45 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:58.770 00:47:45 accel.accel_decmop_full -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.770 ************************************ 00:10:58.770 END TEST accel_decmop_full 00:10:58.770 ************************************ 00:10:58.770 00:47:45 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:58.770 00:47:45 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:10:58.771 00:47:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:58.771 00:47:45 accel -- common/autotest_common.sh@10 -- # set +x 00:10:58.771 ************************************ 00:10:58.771 START TEST accel_decomp_mcore 00:10:58.771 ************************************ 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:58.771 00:47:45 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:58.771 [2024-05-15 00:47:45.203619] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:10:58.771 [2024-05-15 00:47:45.203726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331131 ] 00:10:58.771 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.771 [2024-05-15 00:47:45.319822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.771 [2024-05-15 00:47:45.424376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.771 [2024-05-15 00:47:45.424468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.771 [2024-05-15 00:47:45.424580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.771 [2024-05-15 00:47:45.424588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.771 [2024-05-15 00:47:45.429147] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:58.771 [2024-05-15 00:47:45.437126] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=iaa 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=iaa 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:05.356 00:47:51 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:05.356 00:47:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.892 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:07.893 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:07.893 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:07.893 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:07.893 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:11:07.893 00:47:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:07.893 00:47:54 accel.accel_decomp_mcore -- 
accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:11:07.893 00:11:07.893 real 0m9.733s 00:11:07.893 user 0m31.157s 00:11:07.893 sys 0m0.236s 00:11:07.893 00:47:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.893 00:47:54 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:07.893 ************************************ 00:11:07.893 END TEST accel_decomp_mcore 00:11:07.893 ************************************ 00:11:07.893 00:47:54 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:07.893 00:47:54 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:07.893 00:47:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.893 00:47:54 accel -- common/autotest_common.sh@10 -- # set +x 00:11:08.151 ************************************ 00:11:08.151 START TEST accel_decomp_full_mcore 00:11:08.151 ************************************ 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:11:08.151 00:47:54 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:11:08.151 [2024-05-15 00:47:54.999938] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:11:08.151 [2024-05-15 00:47:55.000050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332929 ] 00:11:08.151 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.151 [2024-05-15 00:47:55.121995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.410 [2024-05-15 00:47:55.229323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.410 [2024-05-15 00:47:55.229429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.410 [2024-05-15 00:47:55.229529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.410 [2024-05-15 00:47:55.229537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.410 [2024-05-15 00:47:55.234079] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:08.410 [2024-05-15 00:47:55.242033] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:14.980 00:48:01 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=iaa 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=iaa 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:14.980 00:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:11:18.269 00:11:18.269 real 0m9.735s 00:11:18.269 user 0m0.011s 00:11:18.269 sys 0m0.001s 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:18.269 00:48:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:18.269 ************************************ 00:11:18.269 END TEST accel_decomp_full_mcore 00:11:18.269 ************************************ 00:11:18.269 00:48:04 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:18.269 00:48:04 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:11:18.269 00:48:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:18.269 00:48:04 accel -- common/autotest_common.sh@10 -- # set +x 00:11:18.269 ************************************ 00:11:18.269 START TEST accel_decomp_mthread 00:11:18.269 ************************************ 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:18.269 00:48:04 accel.accel_decomp_mthread -- 
accel/accel.sh@41 -- # jq -r . 00:11:18.269 [2024-05-15 00:48:04.798181] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:11:18.269 [2024-05-15 00:48:04.798291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335161 ] 00:11:18.269 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.269 [2024-05-15 00:48:04.915210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.269 [2024-05-15 00:48:05.016423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.269 [2024-05-15 00:48:05.021036] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:18.269 [2024-05-15 00:48:05.028998] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:24.903 
00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=iaa 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=iaa 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:24.903 00:48:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:11:27.434 00:11:27.434 real 0m9.683s 00:11:27.434 user 0m3.286s 00:11:27.434 sys 0m0.238s 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.434 00:48:14 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:27.434 ************************************ 00:11:27.434 END TEST accel_decomp_mthread 00:11:27.434 ************************************ 00:11:27.434 00:48:14 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:27.434 00:48:14 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:27.434 00:48:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.434 00:48:14 accel -- common/autotest_common.sh@10 -- # set +x 00:11:27.694 ************************************ 00:11:27.694 START TEST accel_decomp_full_mthread 00:11:27.694 ************************************ 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:27.694 00:48:14 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:11:27.694 [2024-05-15 00:48:14.543599] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:11:27.694 [2024-05-15 00:48:14.543707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337361 ] 00:11:27.694 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.694 [2024-05-15 00:48:14.657836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.951 [2024-05-15 00:48:14.758824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.951 [2024-05-15 00:48:14.763352] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:27.952 [2024-05-15 00:48:14.771330] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:34.516 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:34.516 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.516 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.516 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:34.517 
00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=iaa 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=iaa 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:34.517 00:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:11:37.799 00:11:37.799 real 0m9.696s 00:11:37.799 user 0m3.275s 00:11:37.799 sys 0m0.250s 00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:11:37.799 00:48:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:37.799 ************************************ 00:11:37.799 END TEST accel_decomp_full_mthread 00:11:37.799 ************************************ 00:11:37.799 00:48:24 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:11:37.799 00:48:24 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:37.799 00:48:24 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:37.799 00:48:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:37.799 00:48:24 accel -- common/autotest_common.sh@10 -- # set +x 00:11:37.799 00:48:24 accel -- accel/accel.sh@137 -- # build_accel_config 00:11:37.799 00:48:24 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:37.799 00:48:24 accel -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:11:37.799 00:48:24 accel -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:37.799 00:48:24 accel -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:37.799 00:48:24 accel -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:37.799 00:48:24 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:37.799 00:48:24 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:37.799 00:48:24 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:37.799 00:48:24 accel -- accel/accel.sh@41 -- # jq -r . 00:11:37.799 ************************************ 00:11:37.799 START TEST accel_dif_functional_tests 00:11:37.799 ************************************ 00:11:37.799 00:48:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:37.799 [2024-05-15 00:48:24.334234] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
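Before launching the dif app, the harness assembles a small JSON accel configuration (the accel_json_cfg array traced above) and hands it to the binary on a file descriptor via -c /dev/fd/62. A sketch of that pattern follows; only the array entries and the IFS=, / jq / process-substitution steps come from the trace, while the "subsystems" wrapper and the placeholder app name are assumptions:

```bash
#!/usr/bin/env bash
# Assemble the two scan-module entries seen above into a JSON config.
accel_json_cfg=('{"method": "dsa_scan_accel_module"}' '{"method": "iaa_scan_accel_module"}')

build_accel_config() {
  local IFS=,   # join the array elements with commas inside the JSON list
  jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
}

build_accel_config   # prints the assembled JSON
# In the real run this JSON is handed to the test app on a file descriptor:
#   test/accel/dif/dif -c <(build_accel_config)    # appears as /dev/fd/62 in the trace
```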
00:11:37.799 [2024-05-15 00:48:24.334344] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339436 ] 00:11:37.799 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.799 [2024-05-15 00:48:24.452169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:37.799 [2024-05-15 00:48:24.546112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.799 [2024-05-15 00:48:24.546190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.799 [2024-05-15 00:48:24.546195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.799 [2024-05-15 00:48:24.550767] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:37.799 [2024-05-15 00:48:24.558729] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:45.945 00:11:45.945 00:11:45.945 CUnit - A unit testing framework for C - Version 2.1-3 00:11:45.945 http://cunit.sourceforge.net/ 00:11:45.945 00:11:45.945 00:11:45.945 Suite: accel_dif 00:11:45.945 Test: verify: DIF generated, GUARD check ...passed 00:11:45.945 Test: verify: DIF generated, APPTAG check ...passed 00:11:45.945 Test: verify: DIF generated, REFTAG check ...passed 00:11:45.945 Test: verify: DIF not generated, GUARD check ...[2024-05-15 00:48:31.462654] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:45.945 [2024-05-15 00:48:31.462707] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:48:31.462718] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.462728] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.462735] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.462743] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.462750] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:45.945 [2024-05-15 00:48:31.462759] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:45.945 [2024-05-15 00:48:31.462766] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:45.945 [2024-05-15 00:48:31.462785] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:45.945 [2024-05-15 00:48:31.462796] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:11:45.945 [2024-05-15 00:48:31.462823] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:45.945 passed 00:11:45.945 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 00:48:31.462886] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:45.945 [2024-05-15 00:48:31.462897] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:48:31.462909] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.462918] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.462927] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.462935] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.462943] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:45.945 [2024-05-15 00:48:31.462949] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:45.945 [2024-05-15 00:48:31.462957] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:45.945 [2024-05-15 00:48:31.462966] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:45.945 [2024-05-15 00:48:31.462975] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:11:45.945 [2024-05-15 00:48:31.462992] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:45.945 passed 00:11:45.945 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 00:48:31.463030] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:45.945 [2024-05-15 00:48:31.463042] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:48:31.463053] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.463061] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.463067] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.463075] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.945 [2024-05-15 00:48:31.463080] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:45.945 [2024-05-15 00:48:31.463091] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:45.945 [2024-05-15 00:48:31.463098] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:45.945 [2024-05-15 00:48:31.463108] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:45.945 [2024-05-15 00:48:31.463117] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:11:45.945 [2024-05-15 00:48:31.463139] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:45.945 passed 00:11:45.945 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:45.946 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 00:48:31.463233] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:45.946 [2024-05-15 00:48:31.463242] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:48:31.463251] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463258] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463266] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463273] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463281] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:45.946 [2024-05-15 00:48:31.463288] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:45.946 [2024-05-15 00:48:31.463297] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:45.946 [2024-05-15 00:48:31.463306] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:45.946 [2024-05-15 00:48:31.463314] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:11:45.946 passed 00:11:45.946 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:45.946 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:45.946 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:45.946 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 00:48:31.463473] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:45.946 [2024-05-15 00:48:31.463483] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:48:31.463489] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463497] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463503] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463513] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463523] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:45.946 [2024-05-15 00:48:31.463531] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:45.946 [2024-05-15 00:48:31.463536] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:45.946 [2024-05-15 00:48:31.463545] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:45.946 [2024-05-15 00:48:31.463551] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:48:31.463558] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463564] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463572] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463577] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463585] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:45.946 [2024-05-15 00:48:31.463591] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:45.946 [2024-05-15 00:48:31.463602] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:45.946 [2024-05-15 00:48:31.463611] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:45.946 [2024-05-15 00:48:31.463621] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:11:45.946 [2024-05-15 00:48:31.463629] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:11:45.946 passed[2024-05-15 00:48:31.463639] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw: 00:11:45.946 Test: generate copy: DIF generated, GUARD check ...[2024-05-15 00:48:31.463647] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463656] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463662] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463671] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:45.946 [2024-05-15 00:48:31.463678] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:45.946 [2024-05-15 00:48:31.463686] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:45.946 [2024-05-15 00:48:31.463692] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:45.946 passed 00:11:45.946 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:45.946 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:45.946 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-05-15 00:48:31.463834] idxd.c:1565:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:11:45.946 passed 00:11:45.946 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-05-15 00:48:31.463876] idxd.c:1570:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:11:45.946 passed 00:11:45.946 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-05-15 00:48:31.463914] idxd.c:1575:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:11:45.946 passed 00:11:45.946 Test: generate copy: iovecs-len validate ...[2024-05-15 00:48:31.463951] idxd.c:1602:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0]. 
00:11:45.946 passed
00:11:45.946 Test: generate copy: buffer alignment validate ...passed
00:11:45.946
00:11:45.946 Run Summary: Type Total Ran Passed Failed Inactive
00:11:45.946 suites 1 1 n/a 0 0
00:11:45.946 tests 20 20 20 0 0
00:11:45.946 asserts 204 204 204 0 n/a
00:11:45.946
00:11:45.946 Elapsed time = 0.003 seconds
00:11:46.880
00:11:46.880 real 0m9.534s
00:11:46.880 user 0m20.063s
00:11:46.880 sys 0m0.298s
00:11:46.880 00:48:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable
00:11:46.880 00:48:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:11:46.880 ************************************
00:11:46.880 END TEST accel_dif_functional_tests
00:11:46.880 ************************************
00:11:46.880
00:11:46.880 real 3m53.079s
00:11:46.880 user 2m31.059s
00:11:46.880 sys 0m7.424s
00:11:46.880 00:48:33 accel -- common/autotest_common.sh@1122 -- # xtrace_disable
00:11:46.880 00:48:33 accel -- common/autotest_common.sh@10 -- # set +x
00:11:46.880 ************************************
00:11:46.880 END TEST accel
00:11:46.880 ************************************
00:11:46.880 00:48:33 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh
00:11:46.880 00:48:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:11:46.880 00:48:33 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:11:46.880 00:48:33 -- common/autotest_common.sh@10 -- # set +x
00:11:46.880 ************************************
00:11:46.880 START TEST accel_rpc
00:11:46.880 ************************************
00:11:46.880 00:48:33 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh
00:11:47.140 * Looking for test storage...
00:11:47.140 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel
00:11:47.140 00:48:33 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:11:47.140 00:48:33 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3341279
00:11:47.140 00:48:33 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3341279
00:11:47.140 00:48:33 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:11:47.140 00:48:33 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3341279 ']'
00:11:47.140 00:48:33 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:47.140 00:48:33 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:11:47.140 00:48:33 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:47.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:47.140 00:48:33 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:11:47.140 00:48:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.140 [2024-05-15 00:48:34.057454] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
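The accel_rpc suite above drives a bare spdk_tgt started with --wait-for-rpc, installs an ERR trap to kill it on failure, and waits for the /var/tmp/spdk.sock RPC socket before issuing any calls. A rough stand-alone equivalent of that launch/cleanup pattern (waitforlisten and killprocess are autotest helpers; the polling loop below is an assumed substitute, and paths are relative to an SPDK checkout):

```bash
#!/usr/bin/env bash
set -e
./build/bin/spdk_tgt --wait-for-rpc &     # target stays in the pre-init state until told otherwise
spdk_tgt_pid=$!
trap 'kill "$spdk_tgt_pid"; exit 1' ERR   # mirrors the trap ... ERR traced above

# Assumed substitute for waitforlisten: poll until the RPC socket answers.
until ./scripts/rpc.py spdk_get_version &> /dev/null; do sleep 0.2; done

# ... RPC-level tests would run here ...

kill "$spdk_tgt_pid"   # stands in for the harness's killprocess
```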
00:11:47.140 [2024-05-15 00:48:34.057578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341279 ] 00:11:47.140 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.140 [2024-05-15 00:48:34.171592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.397 [2024-05-15 00:48:34.265124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.962 00:48:34 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:47.962 00:48:34 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:47.962 00:48:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:47.962 00:48:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:11:47.962 00:48:34 accel_rpc -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:11:47.962 00:48:34 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:47.962 00:48:34 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 ************************************ 00:11:47.963 START TEST accel_scan_dsa_modules 00:11:47.963 ************************************ 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@1121 -- # accel_scan_dsa_modules_test_suite 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 [2024-05-15 00:48:34.769623] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@648 -- # local es=0 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@651 -- # rpc_cmd dsa_scan_accel_module 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 request: 00:11:47.963 { 00:11:47.963 "method": "dsa_scan_accel_module", 00:11:47.963 "req_id": 1 00:11:47.963 } 00:11:47.963 Got JSON-RPC error response 00:11:47.963 response: 00:11:47.963 { 00:11:47.963 "code": -114, 00:11:47.963 "message": "Operation already in progress" 00:11:47.963 } 00:11:47.963 00:48:34 
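The -114 response above is the expected outcome: the suite first enables the DSA module (the "Enabled DSA user-mode" notice), then asserts that a repeated dsa_scan_accel_module call is refused. Against a running spdk_tgt the same behaviour can be reproduced with the rpc.py script shown in the trace (run from an SPDK checkout; on this revision the second call returns code -114, "Operation already in progress"):

```bash
# First registration succeeds (the target logs "Enabled DSA user-mode"):
./scripts/rpc.py dsa_scan_accel_module
# Repeating it is rejected with the JSON-RPC error seen above:
./scripts/rpc.py dsa_scan_accel_module   # -> code -114, "Operation already in progress"
```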
accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@651 -- # es=1 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.963 00:11:47.963 real 0m0.020s 00:11:47.963 user 0m0.003s 00:11:47.963 sys 0m0.003s 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 ************************************ 00:11:47.963 END TEST accel_scan_dsa_modules 00:11:47.963 ************************************ 00:11:47.963 00:48:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:47.963 00:48:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:11:47.963 00:48:34 accel_rpc -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:11:47.963 00:48:34 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:47.963 00:48:34 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 ************************************ 00:11:47.963 START TEST accel_scan_iaa_modules 00:11:47.963 ************************************ 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@1121 -- # accel_scan_iaa_modules_test_suite 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 [2024-05-15 00:48:34.841614] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@648 -- # local es=0 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@651 -- # rpc_cmd iaa_scan_accel_module 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 request: 00:11:47.963 { 00:11:47.963 "method": "iaa_scan_accel_module", 00:11:47.963 
"req_id": 1 00:11:47.963 } 00:11:47.963 Got JSON-RPC error response 00:11:47.963 response: 00:11:47.963 { 00:11:47.963 "code": -114, 00:11:47.963 "message": "Operation already in progress" 00:11:47.963 } 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@651 -- # es=1 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.963 00:11:47.963 real 0m0.021s 00:11:47.963 user 0m0.003s 00:11:47.963 sys 0m0.002s 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 ************************************ 00:11:47.963 END TEST accel_scan_iaa_modules 00:11:47.963 ************************************ 00:11:47.963 00:48:34 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:47.963 00:48:34 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:47.963 00:48:34 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 ************************************ 00:11:47.963 START TEST accel_assign_opcode 00:11:47.963 ************************************ 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 [2024-05-15 00:48:34.913650] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 [2024-05-15 00:48:34.921651] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.963 00:48:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:56.077 00:48:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.077 00:48:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:56.077 00:48:42 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.077 00:48:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:56.077 00:48:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:56.077 00:48:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:11:56.077 00:48:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.077 software 00:11:56.077 00:11:56.077 real 0m7.188s 00:11:56.077 user 0m0.036s 00:11:56.077 sys 0m0.008s 00:11:56.077 00:48:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:56.077 00:48:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:56.077 ************************************ 00:11:56.077 END TEST accel_assign_opcode 00:11:56.077 ************************************ 00:11:56.077 00:48:42 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3341279 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3341279 ']' 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3341279 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3341279 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3341279' 00:11:56.077 killing process with pid 3341279 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@965 -- # kill 3341279 00:11:56.077 00:48:42 accel_rpc -- common/autotest_common.sh@970 -- # wait 3341279 00:11:57.977 00:11:57.977 real 0m11.080s 00:11:57.977 user 0m4.053s 00:11:57.977 sys 0m0.681s 00:11:57.977 00:48:44 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:57.977 00:48:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.977 ************************************ 00:11:57.977 END TEST accel_rpc 00:11:57.977 ************************************ 00:11:57.977 00:48:45 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:11:57.977 00:48:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:57.977 00:48:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:57.977 00:48:45 -- common/autotest_common.sh@10 -- # set +x 00:11:58.235 ************************************ 00:11:58.235 START TEST app_cmdline 00:11:58.235 ************************************ 00:11:58.235 00:48:45 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:11:58.235 * Looking for test storage... 
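The accel_assign_opcode test above binds the copy opcode to the software module, starts the framework (which is what finally leaves the --wait-for-rpc pre-init state), and confirms the binding through accel_get_opc_assignments. The same flow, using only the RPCs and filters visible in the trace (paths relative to an SPDK checkout, target already running):

```bash
./scripts/rpc.py accel_assign_opc -o copy -m software   # assign the copy opcode
./scripts/rpc.py framework_start_init                   # leave the pre-init state
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software   # expect "software"
```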
00:11:58.235 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:11:58.235 00:48:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:58.235 00:48:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3343459 00:11:58.235 00:48:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3343459 00:11:58.235 00:48:45 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3343459 ']' 00:11:58.235 00:48:45 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:58.235 00:48:45 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.235 00:48:45 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:58.235 00:48:45 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.235 00:48:45 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:58.235 00:48:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:58.235 [2024-05-15 00:48:45.191481] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:11:58.235 [2024-05-15 00:48:45.191597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343459 ] 00:11:58.235 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.494 [2024-05-15 00:48:45.321306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.494 [2024-05-15 00:48:45.414267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.061 00:48:45 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:59.061 00:48:45 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:11:59.061 00:48:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:59.061 { 00:11:59.061 "version": "SPDK v24.05-pre git sha1 c06b0c79b", 00:11:59.061 "fields": { 00:11:59.061 "major": 24, 00:11:59.061 "minor": 5, 00:11:59.061 "patch": 0, 00:11:59.061 "suffix": "-pre", 00:11:59.061 "commit": "c06b0c79b" 00:11:59.061 } 00:11:59.061 } 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:59.061 00:48:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:11:59.061 00:48:46 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:59.318 request: 00:11:59.318 { 00:11:59.318 "method": "env_dpdk_get_mem_stats", 00:11:59.318 "req_id": 1 00:11:59.318 } 00:11:59.318 Got JSON-RPC error response 00:11:59.318 response: 00:11:59.319 { 00:11:59.319 "code": -32601, 00:11:59.319 "message": "Method not found" 00:11:59.319 } 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:59.319 00:48:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3343459 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3343459 ']' 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3343459 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3343459 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3343459' 00:11:59.319 killing process with pid 3343459 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@965 -- # kill 3343459 00:11:59.319 00:48:46 app_cmdline -- common/autotest_common.sh@970 -- # wait 3343459 00:12:00.256 00:12:00.256 real 0m2.097s 00:12:00.256 user 0m2.238s 00:12:00.256 sys 0m0.498s 00:12:00.256 00:48:47 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:00.256 00:48:47 app_cmdline -- common/autotest_common.sh@10 -- 
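The "Method not found" (-32601) error above is deliberate: the cmdline test launches spdk_tgt with an RPC allowlist, so env_dpdk_get_mem_stats is refused even though the target implements it. A reproduction sketch using the flag copied from the trace (the sleep is a crude stand-in for the harness's waitforlisten):

```bash
./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
sleep 1
./scripts/rpc.py spdk_get_version           # allowed by the list above
./scripts/rpc.py rpc_get_methods            # also allowed
./scripts/rpc.py env_dpdk_get_mem_stats     # rejected: -32601 "Method not found"
```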
# set +x 00:12:00.256 ************************************ 00:12:00.256 END TEST app_cmdline 00:12:00.256 ************************************ 00:12:00.256 00:48:47 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:12:00.256 00:48:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:00.256 00:48:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:00.256 00:48:47 -- common/autotest_common.sh@10 -- # set +x 00:12:00.256 ************************************ 00:12:00.256 START TEST version 00:12:00.256 ************************************ 00:12:00.256 00:48:47 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:12:00.256 * Looking for test storage... 00:12:00.256 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:12:00.256 00:48:47 version -- app/version.sh@17 -- # get_header_version major 00:12:00.256 00:48:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:12:00.256 00:48:47 version -- app/version.sh@14 -- # tr -d '"' 00:12:00.256 00:48:47 version -- app/version.sh@14 -- # cut -f2 00:12:00.256 00:48:47 version -- app/version.sh@17 -- # major=24 00:12:00.256 00:48:47 version -- app/version.sh@18 -- # get_header_version minor 00:12:00.256 00:48:47 version -- app/version.sh@14 -- # tr -d '"' 00:12:00.256 00:48:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:12:00.256 00:48:47 version -- app/version.sh@14 -- # cut -f2 00:12:00.256 00:48:47 version -- app/version.sh@18 -- # minor=5 00:12:00.256 00:48:47 version -- app/version.sh@19 -- # get_header_version patch 00:12:00.256 00:48:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:12:00.256 00:48:47 version -- app/version.sh@14 -- # cut -f2 00:12:00.256 00:48:47 version -- app/version.sh@14 -- # tr -d '"' 00:12:00.256 00:48:47 version -- app/version.sh@19 -- # patch=0 00:12:00.256 00:48:47 version -- app/version.sh@20 -- # get_header_version suffix 00:12:00.256 00:48:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:12:00.256 00:48:47 version -- app/version.sh@14 -- # tr -d '"' 00:12:00.256 00:48:47 version -- app/version.sh@14 -- # cut -f2 00:12:00.256 00:48:47 version -- app/version.sh@20 -- # suffix=-pre 00:12:00.256 00:48:47 version -- app/version.sh@22 -- # version=24.5 00:12:00.256 00:48:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:00.256 00:48:47 version -- app/version.sh@28 -- # version=24.5rc0 00:12:00.256 00:48:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:12:00.256 00:48:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:00.515 00:48:47 version -- app/version.sh@30 -- # py_version=24.5rc0 00:12:00.515 00:48:47 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:12:00.515 00:12:00.515 
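version.sh derives each version component by grepping include/spdk/version.h and stripping the quotes, then compares the result with the Python package. The helper below condenses the repeated grep/cut/tr pattern from the trace into one parameterised function (the parameterisation is only for brevity; the expected values are the ones printed in this run):

```bash
#!/usr/bin/env bash
# Run from the root of an SPDK checkout.
get_header_version() {
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)     # 24 in this run
minor=$(get_header_version MINOR)     # 5
patch=$(get_header_version PATCH)     # 0
suffix=$(get_header_version SUFFIX)   # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
echo "header version: $version$suffix"   # the suite maps 24.5 plus -pre to the Python 24.5rc0
```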
real 0m0.134s 00:12:00.515 user 0m0.058s 00:12:00.515 sys 0m0.108s 00:12:00.515 00:48:47 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:00.515 00:48:47 version -- common/autotest_common.sh@10 -- # set +x 00:12:00.515 ************************************ 00:12:00.515 END TEST version 00:12:00.515 ************************************ 00:12:00.515 00:48:47 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:12:00.515 00:48:47 -- spdk/autotest.sh@194 -- # uname -s 00:12:00.515 00:48:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:12:00.515 00:48:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:00.515 00:48:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:00.515 00:48:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:12:00.515 00:48:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:12:00.515 00:48:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:12:00.515 00:48:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.515 00:48:47 -- common/autotest_common.sh@10 -- # set +x 00:12:00.515 00:48:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:12:00.515 00:48:47 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:12:00.515 00:48:47 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:12:00.515 00:48:47 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:12:00.515 00:48:47 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:12:00.515 00:48:47 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:12:00.515 00:48:47 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:00.515 00:48:47 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:00.515 00:48:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:00.515 00:48:47 -- common/autotest_common.sh@10 -- # set +x 00:12:00.515 ************************************ 00:12:00.515 START TEST nvmf_tcp 00:12:00.515 ************************************ 00:12:00.515 00:48:47 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:00.515 * Looking for test storage... 00:12:00.515 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:00.515 00:48:47 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.515 00:48:47 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.515 00:48:47 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.515 00:48:47 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.515 00:48:47 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.515 00:48:47 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.515 00:48:47 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:12:00.515 00:48:47 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:12:00.515 00:48:47 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:00.515 00:48:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:12:00.515 00:48:47 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:00.515 00:48:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:00.515 00:48:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:00.515 00:48:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:00.773 ************************************ 00:12:00.773 START TEST nvmf_example 00:12:00.773 ************************************ 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:00.773 * Looking for test storage... 
00:12:00.773 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:00.773 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:12:00.774 00:48:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:12:07.342 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:07.343 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:07.343 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:07.343 Found net devices under 0000:27:00.0: cvl_0_0 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:07.343 Found net devices under 0000:27:00.1: cvl_0_1 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:07.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:12:07.343 00:12:07.343 --- 10.0.0.2 ping statistics --- 00:12:07.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.343 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:12:07.343 00:12:07.343 --- 10.0.0.1 ping statistics --- 00:12:07.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.343 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3347722 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3347722 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3347722 ']' 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
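[editor's note] The traced nvmf_tcp_init above moves one port of the NIC pair into a private network namespace and wires up the 10.0.0.0/24 test subnet before the target is started. A minimal sketch of that setup, lifted from the commands visible in this run (the interface names cvl_0_0/cvl_0_1 and the addresses are specific to this machine; root privileges are assumed):

    # move the "target" port into its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator side stays in the root namespace, target side inside the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic in and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1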
00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.343 00:48:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:07.343 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.343 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:07.344 00:48:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:07.602 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.878 Initializing NVMe Controllers 00:12:19.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:19.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:19.878 Initialization complete. Launching workers. 00:12:19.878 ======================================================== 00:12:19.878 Latency(us) 00:12:19.878 Device Information : IOPS MiB/s Average min max 00:12:19.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18481.79 72.19 3462.51 673.91 16388.27 00:12:19.878 ======================================================== 00:12:19.878 Total : 18481.79 72.19 3462.51 673.91 16388.27 00:12:19.878 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.878 rmmod nvme_tcp 00:12:19.878 rmmod nvme_fabrics 00:12:19.878 rmmod nvme_keyring 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:19.878 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3347722 ']' 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3347722 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3347722 ']' 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3347722 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3347722 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3347722' 00:12:19.879 killing process with pid 3347722 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3347722 00:12:19.879 00:49:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3347722 00:12:19.879 nvmf threads initialize successfully 00:12:19.879 bdev subsystem init successfully 00:12:19.879 created a nvmf target service 00:12:19.879 create targets's poll groups done 00:12:19.879 all subsystems of target started 00:12:19.879 nvmf target is running 00:12:19.879 all subsystems of target stopped 00:12:19.879 destroy targets's 
poll groups done 00:12:19.879 destroyed the nvmf target service 00:12:19.879 bdev subsystem finish successfully 00:12:19.879 nvmf threads destroy successfully 00:12:19.879 00:49:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.879 00:49:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.879 00:49:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.879 00:49:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.879 00:49:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.879 00:49:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.879 00:49:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.879 00:49:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.448 00:49:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:20.448 00:49:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:20.448 00:49:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.448 00:49:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.448 00:12:20.448 real 0m19.858s 00:12:20.448 user 0m47.083s 00:12:20.448 sys 0m5.333s 00:12:20.448 00:49:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:20.448 00:49:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.448 ************************************ 00:12:20.448 END TEST nvmf_example 00:12:20.448 ************************************ 00:12:20.448 00:49:07 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:20.448 00:49:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:20.448 00:49:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:20.448 00:49:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:20.448 ************************************ 00:12:20.448 START TEST nvmf_filesystem 00:12:20.448 ************************************ 00:12:20.448 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:20.710 * Looking for test storage... 
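[editor's note] The nvmf_example test that just finished configures the target through the suite's rpc_cmd helper; the equivalent sequence with plain scripts/rpc.py is sketched below for reference. The binary paths and the default /var/tmp/spdk.sock RPC socket match this run; treating rpc.py as a drop-in for rpc_cmd is an assumption, not part of the test script itself.

    # start the nvmf example app inside the target namespace (backgrounded)
    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &

    # create the TCP transport and a 64 MiB malloc bdev with 512-byte blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512        # returns Malloc0

    # expose the bdev through a subsystem listening on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # drive 10 seconds of 4 KiB 70/30 random read/write from the initiator side
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'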
00:12:20.710 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:20.710 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # 
_examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:20.711 #define SPDK_CONFIG_H 00:12:20.711 #define SPDK_CONFIG_APPS 1 00:12:20.711 #define SPDK_CONFIG_ARCH native 00:12:20.711 #define SPDK_CONFIG_ASAN 1 00:12:20.711 #undef SPDK_CONFIG_AVAHI 00:12:20.711 #undef SPDK_CONFIG_CET 00:12:20.711 #define SPDK_CONFIG_COVERAGE 1 00:12:20.711 #define SPDK_CONFIG_CROSS_PREFIX 00:12:20.711 #undef SPDK_CONFIG_CRYPTO 00:12:20.711 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:20.711 #undef SPDK_CONFIG_CUSTOMOCF 00:12:20.711 #undef SPDK_CONFIG_DAOS 00:12:20.711 #define SPDK_CONFIG_DAOS_DIR 00:12:20.711 #define SPDK_CONFIG_DEBUG 1 00:12:20.711 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:20.711 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:12:20.711 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:20.711 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:20.711 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:20.711 #undef SPDK_CONFIG_DPDK_UADK 00:12:20.711 #define SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:12:20.711 #define SPDK_CONFIG_EXAMPLES 1 00:12:20.711 #undef SPDK_CONFIG_FC 00:12:20.711 #define SPDK_CONFIG_FC_PATH 00:12:20.711 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:20.711 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:20.711 #undef SPDK_CONFIG_FUSE 00:12:20.711 #undef SPDK_CONFIG_FUZZER 00:12:20.711 #define SPDK_CONFIG_FUZZER_LIB 00:12:20.711 #undef SPDK_CONFIG_GOLANG 00:12:20.711 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:20.711 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:20.711 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:20.711 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:12:20.711 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:20.711 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:20.711 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:20.711 #define SPDK_CONFIG_IDXD 1 00:12:20.711 #undef SPDK_CONFIG_IDXD_KERNEL 00:12:20.711 #undef SPDK_CONFIG_IPSEC_MB 00:12:20.711 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:20.711 #define SPDK_CONFIG_ISAL 1 00:12:20.711 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:20.711 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:20.711 #define SPDK_CONFIG_LIBDIR 00:12:20.711 #undef SPDK_CONFIG_LTO 00:12:20.711 #define SPDK_CONFIG_MAX_LCORES 00:12:20.711 #define SPDK_CONFIG_NVME_CUSE 1 00:12:20.711 #undef SPDK_CONFIG_OCF 00:12:20.711 #define SPDK_CONFIG_OCF_PATH 00:12:20.711 #define SPDK_CONFIG_OPENSSL_PATH 00:12:20.711 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:20.711 #define SPDK_CONFIG_PGO_DIR 00:12:20.711 #undef SPDK_CONFIG_PGO_USE 00:12:20.711 #define SPDK_CONFIG_PREFIX /usr/local 00:12:20.711 
#undef SPDK_CONFIG_RAID5F 00:12:20.711 #undef SPDK_CONFIG_RBD 00:12:20.711 #define SPDK_CONFIG_RDMA 1 00:12:20.711 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:20.711 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:20.711 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:20.711 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:20.711 #define SPDK_CONFIG_SHARED 1 00:12:20.711 #undef SPDK_CONFIG_SMA 00:12:20.711 #define SPDK_CONFIG_TESTS 1 00:12:20.711 #undef SPDK_CONFIG_TSAN 00:12:20.711 #define SPDK_CONFIG_UBLK 1 00:12:20.711 #define SPDK_CONFIG_UBSAN 1 00:12:20.711 #undef SPDK_CONFIG_UNIT_TESTS 00:12:20.711 #undef SPDK_CONFIG_URING 00:12:20.711 #define SPDK_CONFIG_URING_PATH 00:12:20.711 #undef SPDK_CONFIG_URING_ZNS 00:12:20.711 #undef SPDK_CONFIG_USDT 00:12:20.711 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:20.711 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:20.711 #undef SPDK_CONFIG_VFIO_USER 00:12:20.711 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:20.711 #define SPDK_CONFIG_VHOST 1 00:12:20.711 #define SPDK_CONFIG_VIRTIO 1 00:12:20.711 #undef SPDK_CONFIG_VTUNE 00:12:20.711 #define SPDK_CONFIG_VTUNE_DIR 00:12:20.711 #define SPDK_CONFIG_WERROR 1 00:12:20.711 #define SPDK_CONFIG_WPDK_DIR 00:12:20.711 #undef SPDK_CONFIG_XNVME 00:12:20.711 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.711 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:20.712 
00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power ]] 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@87 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 1 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:12:20.712 00:49:07 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:12:20.712 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 
00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 1 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 1 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem 
-- common/autotest_common.sh@278 -- # MAKE=make 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j128 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3350500 ]] 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3350500 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.ODTYkF 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ODTYkF/tests/target /tmp/spdk.ODTYkF 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:12:20.713 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:12:20.714 
00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=972791808 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4311638016 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=123943464960 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=129472483328 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5529018368 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64731529216 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64736239616 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=25884815360 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=25894498304 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9682944 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=efivarfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=efivarfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=66560 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=507904 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=437248 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # 
read -r source fs size use avail _ mount 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64735817728 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64736243712 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=425984 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12947243008 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12947247104 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:12:20.714 * Looking for test storage... 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=123943464960 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=7743610880 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:20.714 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:20.714 
00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.714 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.715 00:49:07 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.715 00:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.050 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:26.051 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:26.051 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:26.051 Found net devices under 0000:27:00.0: cvl_0_0 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:26.051 Found net devices under 0000:27:00.1: cvl_0_1 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
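The NIC discovery traced above leans on sysfs rather than any SPDK-specific tooling: once a network PCI function is bound to a kernel driver, its interface name shows up under /sys/bus/pci/devices/<bdf>/net/, which is exactly what the pci_net_devs glob expands. A small illustration, assuming the two ice ports found in this run are still bound and named as reported:

    # List the netdev name(s) behind each candidate PCI function (0000:27:00.0/.1 in this run).
    for pci in 0000:27:00.0 0000:27:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done
    # Prints e.g. "0000:27:00.0 -> cvl_0_0" when the port is driver-bound with the name seen above.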
00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.051 00:49:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:12:26.051 00:12:26.051 --- 10.0.0.2 ping statistics --- 00:12:26.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.051 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:12:26.051 00:12:26.051 --- 10.0.0.1 ping statistics --- 00:12:26.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.051 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.051 00:49:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.312 ************************************ 00:12:26.312 START TEST nvmf_filesystem_no_in_capsule 00:12:26.312 ************************************ 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 
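What the nvmf_tcp_init trace above boils down to is a two-port, point-to-point NVMe/TCP topology: the target port is moved into its own network namespace and the two sides talk as 10.0.0.2 (target) and 10.0.0.1 (initiator). A condensed sketch of that plumbing, using the interface and namespace names from this run and assuming the ports start with no addresses assigned:

    # Isolate the target-side port in its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (default netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (inside the netns)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) in
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check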
00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3354029 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3354029 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3354029 ']' 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.312 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.312 [2024-05-15 00:49:13.229291] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:12:26.312 [2024-05-15 00:49:13.229396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.312 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.312 [2024-05-15 00:49:13.352604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.571 [2024-05-15 00:49:13.447979] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.571 [2024-05-15 00:49:13.448021] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.571 [2024-05-15 00:49:13.448031] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.571 [2024-05-15 00:49:13.448041] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.571 [2024-05-15 00:49:13.448052] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
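The target here is the stock nvmf_tgt application, launched under ip netns exec so it only ever sees the namespaced port; the test then blocks until the RPC socket (/var/tmp/spdk.sock, the DEFAULT_RPC_ADDR exported earlier) is usable. A minimal sketch of that start-up, with a simplified poll standing in for the waitforlisten helper used by the script:

    # Start the NVMe-oF target inside the target namespace with the same core mask and log flags.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Simplified stand-in for waitforlisten: spin until the app dies or its RPC socket appears.
    while kill -0 "$nvmfpid" 2>/dev/null && [ ! -S /var/tmp/spdk.sock ]; do
        sleep 0.5
    done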
00:12:26.571 [2024-05-15 00:49:13.448246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.571 [2024-05-15 00:49:13.448327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.571 [2024-05-15 00:49:13.448422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.571 [2024-05-15 00:49:13.448432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.137 [2024-05-15 00:49:13.979355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.137 00:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.396 Malloc1 00:12:27.396 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.397 [2024-05-15 00:49:14.242204] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:27.397 [2024-05-15 00:49:14.242485] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:12:27.397 { 00:12:27.397 "name": "Malloc1", 00:12:27.397 "aliases": [ 00:12:27.397 "ef0511b5-8a77-464e-b1d0-aae1ab5a1c1e" 00:12:27.397 ], 00:12:27.397 "product_name": "Malloc disk", 00:12:27.397 "block_size": 512, 00:12:27.397 "num_blocks": 1048576, 00:12:27.397 "uuid": "ef0511b5-8a77-464e-b1d0-aae1ab5a1c1e", 00:12:27.397 "assigned_rate_limits": { 00:12:27.397 "rw_ios_per_sec": 0, 00:12:27.397 "rw_mbytes_per_sec": 0, 00:12:27.397 "r_mbytes_per_sec": 0, 00:12:27.397 "w_mbytes_per_sec": 0 00:12:27.397 }, 00:12:27.397 "claimed": true, 00:12:27.397 "claim_type": "exclusive_write", 00:12:27.397 "zoned": false, 00:12:27.397 "supported_io_types": { 00:12:27.397 "read": true, 00:12:27.397 "write": true, 00:12:27.397 "unmap": true, 00:12:27.397 "write_zeroes": true, 00:12:27.397 "flush": true, 00:12:27.397 "reset": true, 00:12:27.397 "compare": false, 00:12:27.397 "compare_and_write": false, 00:12:27.397 "abort": true, 00:12:27.397 "nvme_admin": false, 00:12:27.397 "nvme_io": false 00:12:27.397 }, 00:12:27.397 "memory_domains": [ 00:12:27.397 { 00:12:27.397 "dma_device_id": "system", 00:12:27.397 "dma_device_type": 1 
00:12:27.397 }, 00:12:27.397 { 00:12:27.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.397 "dma_device_type": 2 00:12:27.397 } 00:12:27.397 ], 00:12:27.397 "driver_specific": {} 00:12:27.397 } 00:12:27.397 ]' 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:27.397 00:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.776 00:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.776 00:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:12:28.776 00:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.776 00:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:28.776 00:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:12:30.681 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:30.681 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:30.681 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:30.941 00:49:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:30.941 00:49:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:31.508 00:49:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.885 ************************************ 00:12:32.885 START TEST filesystem_ext4 00:12:32.885 ************************************ 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:12:32.885 00:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:12:32.885 00:49:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:32.885 mke2fs 1.46.5 (30-Dec-2021) 00:12:32.885 Discarding device blocks: 0/522240 done 00:12:32.885 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:32.885 Filesystem UUID: 1c30a976-d609-454a-a70a-f32905de6921 00:12:32.885 Superblock backups stored on blocks: 00:12:32.885 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:32.885 00:12:32.885 Allocating group tables: 0/64 done 00:12:32.885 Writing inode tables: 0/64 done 00:12:36.166 Creating journal (8192 blocks): done 00:12:36.425 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:12:36.425 00:12:36.425 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:12:36.425 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.358 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3354029 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.616 00:12:37.616 real 0m4.875s 00:12:37.616 user 0m0.023s 00:12:37.616 sys 0m0.041s 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:37.616 ************************************ 00:12:37.616 END TEST filesystem_ext4 00:12:37.616 ************************************ 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:37.616 
00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.616 ************************************ 00:12:37.616 START TEST filesystem_btrfs 00:12:37.616 ************************************ 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:12:37.616 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:37.874 btrfs-progs v6.6.2 00:12:37.874 See https://btrfs.readthedocs.io for more information. 00:12:37.874 00:12:37.874 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:37.874 NOTE: several default settings have changed in version 5.15, please make sure 00:12:37.874 this does not affect your deployments: 00:12:37.874 - DUP for metadata (-m dup) 00:12:37.874 - enabled no-holes (-O no-holes) 00:12:37.874 - enabled free-space-tree (-R free-space-tree) 00:12:37.874 00:12:37.874 Label: (null) 00:12:37.874 UUID: 34b31d31-b6ed-4418-b331-ceafbfb2149d 00:12:37.874 Node size: 16384 00:12:37.874 Sector size: 4096 00:12:37.874 Filesystem size: 510.00MiB 00:12:37.874 Block group profiles: 00:12:37.874 Data: single 8.00MiB 00:12:37.874 Metadata: DUP 32.00MiB 00:12:37.874 System: DUP 8.00MiB 00:12:37.874 SSD detected: yes 00:12:37.874 Zoned device: no 00:12:37.874 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:37.874 Runtime features: free-space-tree 00:12:37.874 Checksum: crc32c 00:12:37.874 Number of devices: 1 00:12:37.874 Devices: 00:12:37.874 ID SIZE PATH 00:12:37.874 1 510.00MiB /dev/nvme0n1p1 00:12:37.874 00:12:37.874 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:12:37.874 00:49:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3354029 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:38.810 00:12:38.810 real 0m1.058s 00:12:38.810 user 0m0.017s 00:12:38.810 sys 0m0.060s 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:38.810 ************************************ 00:12:38.810 END TEST filesystem_btrfs 00:12:38.810 ************************************ 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:38.810 00:49:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:38.810 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.810 ************************************ 00:12:38.810 START TEST filesystem_xfs 00:12:38.810 ************************************ 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:12:38.811 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:38.811 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:38.811 = sectsz=512 attr=2, projid32bit=1 00:12:38.811 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:38.811 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:38.811 data = bsize=4096 blocks=130560, imaxpct=25 00:12:38.811 = sunit=0 swidth=0 blks 00:12:38.811 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:38.811 log =internal log bsize=4096 blocks=16384, version=2 00:12:38.811 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:38.811 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:39.746 Discarding blocks...Done. 
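Each mkfs above is followed by the same mount-and-verify pass (target/filesystem.sh lines 23-43 in the trace). Stripped of the xtrace decoration it amounts to the commands below, where $nvmfpid stands for the target pid 3354029 shown in the kill -0 lines:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                      # the filesystem must accept a write
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # test partition still present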
00:12:39.746 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:12:39.746 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3354029 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:42.317 00:12:42.317 real 0m3.316s 00:12:42.317 user 0m0.024s 00:12:42.317 sys 0m0.041s 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:42.317 00:49:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:42.317 ************************************ 00:12:42.317 END TEST filesystem_xfs 00:12:42.317 ************************************ 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:42.317 
00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3354029 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3354029 ']' 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3354029 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3354029 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:42.317 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:42.318 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3354029' 00:12:42.318 killing process with pid 3354029 00:12:42.318 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3354029 00:12:42.318 [2024-05-15 00:49:29.185214] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:42.318 00:49:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3354029 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:43.256 00:12:43.256 real 0m16.973s 00:12:43.256 user 1m5.979s 00:12:43.256 sys 0m1.034s 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.256 ************************************ 00:12:43.256 END TEST nvmf_filesystem_no_in_capsule 00:12:43.256 ************************************ 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:43.256 ************************************ 00:12:43.256 START TEST nvmf_filesystem_in_capsule 00:12:43.256 ************************************ 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3357516 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3357516 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3357516 ']' 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.256 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:43.257 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.257 [2024-05-15 00:49:30.279858] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:12:43.257 [2024-05-15 00:49:30.279968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.515 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.515 [2024-05-15 00:49:30.408900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.515 [2024-05-15 00:49:30.509615] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.515 [2024-05-15 00:49:30.509659] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
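The nvmfappstart -m 0xF call that produced the start-up banner above and below boils down to launching nvmf_tgt inside the test network namespace and waiting for its RPC socket. A sketch using the binary path, namespace and masks from the trace; the polling inside waitforlisten is paraphrased, and rpc_get_methods is an assumption about how that helper checks readiness:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten: block until the target answers on /var/tmp/spdk.sock
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done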
00:12:43.515 [2024-05-15 00:49:30.509670] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.515 [2024-05-15 00:49:30.509682] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.515 [2024-05-15 00:49:30.509689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.515 [2024-05-15 00:49:30.509796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.515 [2024-05-15 00:49:30.509884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.515 [2024-05-15 00:49:30.510024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.515 [2024-05-15 00:49:30.510035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.085 [2024-05-15 00:49:31.049593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.085 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.345 Malloc1 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.345 00:49:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.345 [2024-05-15 00:49:31.315248] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:44.345 [2024-05-15 00:49:31.315607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:12:44.345 { 00:12:44.345 "name": "Malloc1", 00:12:44.345 "aliases": [ 00:12:44.345 "1428f421-1608-4702-95a9-8d07a89ad848" 00:12:44.345 ], 00:12:44.345 "product_name": "Malloc disk", 00:12:44.345 "block_size": 512, 00:12:44.345 "num_blocks": 1048576, 00:12:44.345 "uuid": "1428f421-1608-4702-95a9-8d07a89ad848", 00:12:44.345 "assigned_rate_limits": { 00:12:44.345 "rw_ios_per_sec": 0, 00:12:44.345 "rw_mbytes_per_sec": 0, 00:12:44.345 "r_mbytes_per_sec": 0, 00:12:44.345 "w_mbytes_per_sec": 0 00:12:44.345 }, 00:12:44.345 "claimed": true, 00:12:44.345 "claim_type": "exclusive_write", 00:12:44.345 "zoned": false, 00:12:44.345 "supported_io_types": { 00:12:44.345 "read": true, 00:12:44.345 "write": true, 00:12:44.345 "unmap": true, 00:12:44.345 "write_zeroes": true, 00:12:44.345 "flush": true, 00:12:44.345 "reset": true, 
00:12:44.345 "compare": false, 00:12:44.345 "compare_and_write": false, 00:12:44.345 "abort": true, 00:12:44.345 "nvme_admin": false, 00:12:44.345 "nvme_io": false 00:12:44.345 }, 00:12:44.345 "memory_domains": [ 00:12:44.345 { 00:12:44.345 "dma_device_id": "system", 00:12:44.345 "dma_device_type": 1 00:12:44.345 }, 00:12:44.345 { 00:12:44.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.345 "dma_device_type": 2 00:12:44.345 } 00:12:44.345 ], 00:12:44.345 "driver_specific": {} 00:12:44.345 } 00:12:44.345 ]' 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:12:44.345 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:12:44.604 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:12:44.605 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:12:44.605 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:12:44.605 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:44.605 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.981 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.981 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:12:45.981 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.981 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:45.981 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:47.885 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:48.143 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:48.713 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.097 ************************************ 00:12:50.097 START TEST filesystem_in_capsule_ext4 00:12:50.097 ************************************ 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:12:50.097 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:50.097 mke2fs 1.46.5 (30-Dec-2021) 00:12:50.097 Discarding device blocks: 0/522240 done 00:12:50.097 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:50.097 Filesystem UUID: bedaeeac-fe36-40fe-b51f-1148d56fb873 00:12:50.097 Superblock backups stored on blocks: 00:12:50.097 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:50.097 00:12:50.097 Allocating group tables: 0/64 done 00:12:50.097 Writing inode tables: 0/64 done 00:12:51.035 Creating journal (8192 blocks): done 00:12:51.602 Writing superblocks and filesystem accounting information: 0/64 done 00:12:51.602 00:12:51.602 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:12:51.602 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3357516 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:52.540 00:12:52.540 real 0m2.754s 00:12:52.540 user 0m0.025s 00:12:52.540 sys 0m0.035s 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:52.540 ************************************ 00:12:52.540 END TEST filesystem_in_capsule_ext4 00:12:52.540 ************************************ 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.540 ************************************ 00:12:52.540 START TEST filesystem_in_capsule_btrfs 00:12:52.540 ************************************ 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:12:52.540 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:12:52.541 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:12:52.541 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:12:52.541 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:12:52.541 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:12:52.541 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:52.799 btrfs-progs v6.6.2 00:12:52.799 See https://btrfs.readthedocs.io for more information. 00:12:52.799 00:12:52.799 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
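For reference, the provisioning that set up this in-capsule run (traced at target/filesystem.sh lines 52-69, before the ext4 pass above) condenses to the RPC and initiator commands below. The -c 4096 option is what enables 4096-byte in-capsule data for this variant; $NVME_HOSTNQN and $NVME_HOSTID stand for the host identity values spelled out in the traced nvme connect line:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096        # 4096-byte in-capsule data
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%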
00:12:52.799 NOTE: several default settings have changed in version 5.15, please make sure 00:12:52.799 this does not affect your deployments: 00:12:52.799 - DUP for metadata (-m dup) 00:12:52.799 - enabled no-holes (-O no-holes) 00:12:52.799 - enabled free-space-tree (-R free-space-tree) 00:12:52.799 00:12:52.799 Label: (null) 00:12:52.799 UUID: 77e64c4b-09d9-4bca-a81c-19fef0ba03fe 00:12:52.799 Node size: 16384 00:12:52.799 Sector size: 4096 00:12:52.799 Filesystem size: 510.00MiB 00:12:52.799 Block group profiles: 00:12:52.799 Data: single 8.00MiB 00:12:52.799 Metadata: DUP 32.00MiB 00:12:52.799 System: DUP 8.00MiB 00:12:52.799 SSD detected: yes 00:12:52.799 Zoned device: no 00:12:52.799 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:52.799 Runtime features: free-space-tree 00:12:52.799 Checksum: crc32c 00:12:52.799 Number of devices: 1 00:12:52.799 Devices: 00:12:52.799 ID SIZE PATH 00:12:52.799 1 510.00MiB /dev/nvme0n1p1 00:12:52.799 00:12:52.799 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:12:52.799 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3357516 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:53.368 00:12:53.368 real 0m0.840s 00:12:53.368 user 0m0.018s 00:12:53.368 sys 0m0.056s 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:53.368 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:53.368 ************************************ 00:12:53.368 END TEST filesystem_in_capsule_btrfs 00:12:53.368 ************************************ 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.628 ************************************ 00:12:53.628 START TEST filesystem_in_capsule_xfs 00:12:53.628 ************************************ 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:12:53.628 00:49:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:53.628 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:53.628 = sectsz=512 attr=2, projid32bit=1 00:12:53.628 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:53.628 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:53.628 data = bsize=4096 blocks=130560, imaxpct=25 00:12:53.628 = sunit=0 swidth=0 blks 00:12:53.628 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:53.628 log =internal log bsize=4096 blocks=16384, version=2 00:12:53.628 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:53.629 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:54.195 Discarding blocks...Done. 
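Once the xfs pass below completes, the test tears the stack back down exactly as the no-in-capsule run did earlier. Reconstructed from the traced commands (target/filesystem.sh lines 91-101), with $nvmfpid standing for the pid shown in the kill lines:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1      # drop the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # detach the kernel initiator
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    killprocess "$nvmfpid"                              # stop the nvmf_tgt reactor process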
00:12:54.195 00:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:12:54.195 00:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3357516 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:56.734 00:12:56.734 real 0m3.154s 00:12:56.734 user 0m0.019s 00:12:56.734 sys 0m0.053s 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:56.734 ************************************ 00:12:56.734 END TEST filesystem_in_capsule_xfs 00:12:56.734 ************************************ 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:56.734 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.994 00:49:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3357516 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3357516 ']' 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3357516 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3357516 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3357516' 00:12:56.994 killing process with pid 3357516 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3357516 00:12:56.994 [2024-05-15 00:49:43.909118] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:56.994 00:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3357516 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:57.929 00:12:57.929 real 0m14.663s 00:12:57.929 user 0m56.754s 00:12:57.929 sys 0m1.060s 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.929 ************************************ 00:12:57.929 END TEST nvmf_filesystem_in_capsule 00:12:57.929 ************************************ 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.929 rmmod nvme_tcp 00:12:57.929 rmmod nvme_fabrics 00:12:57.929 rmmod nvme_keyring 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.929 00:49:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.461 00:49:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.461 00:13:00.461 real 0m39.511s 00:13:00.461 user 2m4.306s 00:13:00.461 sys 0m6.257s 00:13:00.461 00:49:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.461 00:49:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:00.461 ************************************ 00:13:00.461 END TEST nvmf_filesystem 00:13:00.461 ************************************ 00:13:00.461 00:49:47 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:00.461 00:49:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:00.461 00:49:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.461 00:49:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.461 ************************************ 00:13:00.461 START TEST nvmf_target_discovery 00:13:00.461 ************************************ 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:00.461 * Looking for test storage... 
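The nvmftestfini teardown traced just above, run before the discovery test starts, condenses to the steps below. Module names and the flushed interface are taken from the trace; the retry loop wrapped around the modprobe calls is omitted:

    sync
    modprobe -v -r nvme-tcp        # also unloads nvme_fabrics and nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1       # clear addresses left on the test interface cvl_0_1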
00:13:00.461 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.461 00:49:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.770 00:49:52 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:05.770 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:05.770 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.770 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:05.771 Found net devices under 0000:27:00.0: cvl_0_0 
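The loop traced at nvmf/common.sh@382-401 above resolves each supported PCI NIC to its kernel interface name by globbing the device's net/ directory in sysfs, which is where the "Found net devices under 0000:27:00.0: cvl_0_0" lines come from. A stand-alone sketch of that lookup, assuming the same pci_devs/net_devs arrays used in the trace:

# Sketch of the sysfs lookup traced at nvmf/common.sh@383-401 (not the verbatim helper).
for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:27:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
done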
00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:05.771 Found net devices under 0000:27:00.1: cvl_0_1 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:05.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:13:05.771 00:13:05.771 --- 10.0.0.2 ping statistics --- 00:13:05.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.771 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:05.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:13:05.771 00:13:05.771 --- 10.0.0.1 ping statistics --- 00:13:05.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.771 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3364542 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3364542 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3364542 ']' 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:05.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:05.771 00:49:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.771 [2024-05-15 00:49:52.397396] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:13:05.771 [2024-05-15 00:49:52.397464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.771 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.771 [2024-05-15 00:49:52.488834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.771 [2024-05-15 00:49:52.584123] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.771 [2024-05-15 00:49:52.584162] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.771 [2024-05-15 00:49:52.584172] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.771 [2024-05-15 00:49:52.584181] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.771 [2024-05-15 00:49:52.584189] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.771 [2024-05-15 00:49:52.584300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.771 [2024-05-15 00:49:52.584385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.771 [2024-05-15 00:49:52.584485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.771 [2024-05-15 00:49:52.584501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 [2024-05-15 00:49:53.165749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 Null1 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 [2024-05-15 00:49:53.213731] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:06.339 [2024-05-15 00:49:53.213972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 Null2 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.339 Null3 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.339 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 Null4 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 4420 00:13:06.340 00:13:06.340 Discovery Log Number of Records 6, Generation counter 6 00:13:06.340 =====Discovery Log Entry 0====== 00:13:06.340 trtype: tcp 00:13:06.340 adrfam: ipv4 00:13:06.340 subtype: current discovery subsystem 00:13:06.340 treq: not required 00:13:06.340 portid: 0 00:13:06.340 trsvcid: 4420 00:13:06.340 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:06.340 traddr: 10.0.0.2 00:13:06.340 eflags: explicit discovery connections, duplicate discovery information 00:13:06.340 sectype: none 00:13:06.340 =====Discovery Log Entry 1====== 00:13:06.340 trtype: tcp 00:13:06.340 adrfam: ipv4 00:13:06.340 subtype: nvme subsystem 00:13:06.340 treq: not required 00:13:06.340 portid: 0 00:13:06.340 trsvcid: 4420 00:13:06.340 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:06.340 traddr: 10.0.0.2 00:13:06.340 eflags: none 00:13:06.340 sectype: none 00:13:06.340 =====Discovery Log Entry 2====== 00:13:06.340 trtype: tcp 00:13:06.340 adrfam: ipv4 00:13:06.340 subtype: nvme subsystem 00:13:06.340 treq: not required 00:13:06.340 portid: 0 00:13:06.340 trsvcid: 4420 00:13:06.340 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:06.340 traddr: 10.0.0.2 00:13:06.340 eflags: none 00:13:06.340 sectype: none 00:13:06.340 =====Discovery Log Entry 3====== 00:13:06.340 trtype: tcp 00:13:06.340 adrfam: ipv4 00:13:06.340 subtype: nvme subsystem 00:13:06.340 treq: not required 00:13:06.340 portid: 0 00:13:06.340 trsvcid: 4420 00:13:06.340 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:06.340 traddr: 10.0.0.2 00:13:06.340 eflags: none 00:13:06.340 
sectype: none 00:13:06.340 =====Discovery Log Entry 4====== 00:13:06.340 trtype: tcp 00:13:06.340 adrfam: ipv4 00:13:06.340 subtype: nvme subsystem 00:13:06.340 treq: not required 00:13:06.340 portid: 0 00:13:06.340 trsvcid: 4420 00:13:06.340 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:06.340 traddr: 10.0.0.2 00:13:06.340 eflags: none 00:13:06.340 sectype: none 00:13:06.340 =====Discovery Log Entry 5====== 00:13:06.340 trtype: tcp 00:13:06.340 adrfam: ipv4 00:13:06.340 subtype: discovery subsystem referral 00:13:06.340 treq: not required 00:13:06.340 portid: 0 00:13:06.340 trsvcid: 4430 00:13:06.340 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:06.340 traddr: 10.0.0.2 00:13:06.340 eflags: none 00:13:06.340 sectype: none 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:06.340 Perform nvmf subsystem discovery via RPC 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.340 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.600 [ 00:13:06.600 { 00:13:06.600 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:06.600 "subtype": "Discovery", 00:13:06.600 "listen_addresses": [ 00:13:06.600 { 00:13:06.600 "trtype": "TCP", 00:13:06.600 "adrfam": "IPv4", 00:13:06.600 "traddr": "10.0.0.2", 00:13:06.600 "trsvcid": "4420" 00:13:06.600 } 00:13:06.600 ], 00:13:06.600 "allow_any_host": true, 00:13:06.600 "hosts": [] 00:13:06.600 }, 00:13:06.600 { 00:13:06.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.600 "subtype": "NVMe", 00:13:06.600 "listen_addresses": [ 00:13:06.600 { 00:13:06.600 "trtype": "TCP", 00:13:06.600 "adrfam": "IPv4", 00:13:06.600 "traddr": "10.0.0.2", 00:13:06.600 "trsvcid": "4420" 00:13:06.600 } 00:13:06.600 ], 00:13:06.600 "allow_any_host": true, 00:13:06.600 "hosts": [], 00:13:06.600 "serial_number": "SPDK00000000000001", 00:13:06.600 "model_number": "SPDK bdev Controller", 00:13:06.600 "max_namespaces": 32, 00:13:06.600 "min_cntlid": 1, 00:13:06.600 "max_cntlid": 65519, 00:13:06.600 "namespaces": [ 00:13:06.600 { 00:13:06.600 "nsid": 1, 00:13:06.600 "bdev_name": "Null1", 00:13:06.600 "name": "Null1", 00:13:06.600 "nguid": "981CEC704447488189DA8245AA00C7B6", 00:13:06.600 "uuid": "981cec70-4447-4881-89da-8245aa00c7b6" 00:13:06.600 } 00:13:06.600 ] 00:13:06.600 }, 00:13:06.600 { 00:13:06.600 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:06.600 "subtype": "NVMe", 00:13:06.600 "listen_addresses": [ 00:13:06.600 { 00:13:06.600 "trtype": "TCP", 00:13:06.600 "adrfam": "IPv4", 00:13:06.600 "traddr": "10.0.0.2", 00:13:06.600 "trsvcid": "4420" 00:13:06.601 } 00:13:06.601 ], 00:13:06.601 "allow_any_host": true, 00:13:06.601 "hosts": [], 00:13:06.601 "serial_number": "SPDK00000000000002", 00:13:06.601 "model_number": "SPDK bdev Controller", 00:13:06.601 "max_namespaces": 32, 00:13:06.601 "min_cntlid": 1, 00:13:06.601 "max_cntlid": 65519, 00:13:06.601 "namespaces": [ 00:13:06.601 { 00:13:06.601 "nsid": 1, 00:13:06.601 "bdev_name": "Null2", 00:13:06.601 "name": "Null2", 00:13:06.601 "nguid": "F9A208179B8C438783AA8F11273381E9", 00:13:06.601 "uuid": "f9a20817-9b8c-4387-83aa-8f11273381e9" 00:13:06.601 } 00:13:06.601 ] 00:13:06.601 }, 00:13:06.601 { 00:13:06.601 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:06.601 "subtype": "NVMe", 00:13:06.601 "listen_addresses": [ 00:13:06.601 { 00:13:06.601 "trtype": 
"TCP", 00:13:06.601 "adrfam": "IPv4", 00:13:06.601 "traddr": "10.0.0.2", 00:13:06.601 "trsvcid": "4420" 00:13:06.601 } 00:13:06.601 ], 00:13:06.601 "allow_any_host": true, 00:13:06.601 "hosts": [], 00:13:06.601 "serial_number": "SPDK00000000000003", 00:13:06.601 "model_number": "SPDK bdev Controller", 00:13:06.601 "max_namespaces": 32, 00:13:06.601 "min_cntlid": 1, 00:13:06.601 "max_cntlid": 65519, 00:13:06.601 "namespaces": [ 00:13:06.601 { 00:13:06.601 "nsid": 1, 00:13:06.601 "bdev_name": "Null3", 00:13:06.601 "name": "Null3", 00:13:06.601 "nguid": "F1F2670C68B04BD4A54572482CD266B6", 00:13:06.601 "uuid": "f1f2670c-68b0-4bd4-a545-72482cd266b6" 00:13:06.601 } 00:13:06.601 ] 00:13:06.601 }, 00:13:06.601 { 00:13:06.601 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:06.601 "subtype": "NVMe", 00:13:06.601 "listen_addresses": [ 00:13:06.601 { 00:13:06.601 "trtype": "TCP", 00:13:06.601 "adrfam": "IPv4", 00:13:06.601 "traddr": "10.0.0.2", 00:13:06.601 "trsvcid": "4420" 00:13:06.601 } 00:13:06.601 ], 00:13:06.601 "allow_any_host": true, 00:13:06.601 "hosts": [], 00:13:06.601 "serial_number": "SPDK00000000000004", 00:13:06.601 "model_number": "SPDK bdev Controller", 00:13:06.601 "max_namespaces": 32, 00:13:06.601 "min_cntlid": 1, 00:13:06.601 "max_cntlid": 65519, 00:13:06.601 "namespaces": [ 00:13:06.601 { 00:13:06.601 "nsid": 1, 00:13:06.601 "bdev_name": "Null4", 00:13:06.601 "name": "Null4", 00:13:06.601 "nguid": "3D7753C6F0CD40F685B52F06FAFAF6A7", 00:13:06.601 "uuid": "3d7753c6-f0cd-40f6-85b5-2f06fafaf6a7" 00:13:06.601 } 00:13:06.601 ] 00:13:06.601 } 00:13:06.601 ] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:06.601 rmmod nvme_tcp 00:13:06.601 rmmod nvme_fabrics 00:13:06.601 rmmod nvme_keyring 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3364542 ']' 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3364542 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3364542 ']' 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3364542 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3364542 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:06.601 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:06.602 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3364542' 00:13:06.602 killing process with pid 3364542 00:13:06.602 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3364542 00:13:06.602 [2024-05-15 00:49:53.619051] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:06.602 00:49:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3364542 00:13:07.170 00:49:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.170 00:49:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.170 00:49:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.170 00:49:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.170 00:49:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.170 00:49:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.170 00:49:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.170 00:49:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.698 00:49:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:09.698 00:13:09.698 real 0m9.060s 00:13:09.698 user 0m7.042s 00:13:09.698 sys 0m4.099s 
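Condensed, the subsystem lifecycle that nvmf_target_discovery just exercised (target/discovery.sh@26-47 in the trace above) creates a null bdev, a subsystem, a namespace, and a TCP listener for each of cnode1..4, publishes a discovery referral on port 4430, and then tears everything down again. A sketch of that sequence using the same rpc_cmd calls that appear in the trace; the serial-number formatting via printf is an assumption made for brevity:

# Per-subsystem setup, mirroring target/discovery.sh@26-35 as traced above.
for i in 1 2 3 4; do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "$(printf 'SPDK%014d' "$i")"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# Teardown, mirroring target/discovery.sh@42-47.
for i in 1 2 3 4; do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

With this configuration in place, the nvme discover output above reports six discovery log entries: the current discovery subsystem, the four NVMe subsystems on port 4420, and the referral on port 4430.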
00:13:09.698 00:49:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:09.698 00:49:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:09.698 ************************************ 00:13:09.698 END TEST nvmf_target_discovery 00:13:09.698 ************************************ 00:13:09.698 00:49:56 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:09.698 00:49:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:09.698 00:49:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:09.698 00:49:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:09.698 ************************************ 00:13:09.698 START TEST nvmf_referrals 00:13:09.698 ************************************ 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:09.698 * Looking for test storage... 00:13:09.698 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.698 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.699 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.699 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.699 00:49:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.699 00:49:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.699 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:09.699 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:09.699 00:49:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:09.699 00:49:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
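target/referrals.sh@11-14 above defines three loopback referral addresses (127.0.0.2, 127.0.0.3, 127.0.0.4) and the referral port 4430; judging from the pattern already used at target/discovery.sh@35 earlier in this log, each of them would be registered against the running target with the same RPC, roughly:

# Sketch only; follows the nvmf_discovery_add_referral usage seen earlier in this log.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
# Each referral then appears as a 'discovery subsystem referral' entry in nvme discover output.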
00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:14.972 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.972 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:14.973 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:14.973 Found net devices under 0000:27:00.0: cvl_0_0 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:14.973 Found net devices under 0000:27:00.1: cvl_0_1 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.973 
00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:14.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.773 ms 00:13:14.973 00:13:14.973 --- 10.0.0.2 ping statistics --- 00:13:14.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.973 rtt min/avg/max/mdev = 0.773/0.773/0.773/0.000 ms 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:13:14.973 00:13:14.973 --- 10.0.0.1 ping statistics --- 00:13:14.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.973 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3368748 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3368748 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3368748 ']' 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.973 00:50:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.973 [2024-05-15 00:50:01.593215] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:13:14.973 [2024-05-15 00:50:01.593342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.973 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.973 [2024-05-15 00:50:01.736489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.973 [2024-05-15 00:50:01.847577] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.973 [2024-05-15 00:50:01.847619] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.973 [2024-05-15 00:50:01.847629] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.973 [2024-05-15 00:50:01.847640] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.973 [2024-05-15 00:50:01.847649] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.973 [2024-05-15 00:50:01.847762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.973 [2024-05-15 00:50:01.847853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.973 [2024-05-15 00:50:01.847962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.973 [2024-05-15 00:50:01.847972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.541 [2024-05-15 00:50:02.336738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.541 [2024-05-15 00:50:02.352701] 
nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:15.541 [2024-05-15 00:50:02.352965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:15.541 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.542 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:15.542 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.542 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:15.542 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:15.542 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:15.542 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:15.542 
00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:15.542 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.542 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:15.542 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:15.800 00:50:02 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.800 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:16.060 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:16.060 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:16.060 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:16.060 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:16.060 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.060 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:16.060 00:50:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.061 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
00:13:16.319 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.576 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:16.835 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 
--hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.836 rmmod nvme_tcp 00:13:16.836 rmmod nvme_fabrics 00:13:16.836 rmmod nvme_keyring 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3368748 ']' 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3368748 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3368748 ']' 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3368748 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:16.836 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3368748 00:13:17.096 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:17.096 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:17.096 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3368748' 00:13:17.096 killing process with pid 3368748 00:13:17.096 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3368748 00:13:17.096 [2024-05-15 00:50:03.908481] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:17.096 00:50:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3368748 00:13:17.356 00:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.356 00:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.356 00:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.356 00:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.356 00:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:13:17.356 00:50:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.356 00:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.356 00:50:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.892 00:50:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:19.892 00:13:19.892 real 0m10.226s 00:13:19.892 user 0m11.705s 00:13:19.892 sys 0m4.443s 00:13:19.892 00:50:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:19.892 00:50:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:19.892 ************************************ 00:13:19.892 END TEST nvmf_referrals 00:13:19.892 ************************************ 00:13:19.892 00:50:06 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:19.892 00:50:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:19.892 00:50:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:19.892 00:50:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.892 ************************************ 00:13:19.892 START TEST nvmf_connect_disconnect 00:13:19.892 ************************************ 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:19.892 * Looking for test storage... 00:13:19.892 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.892 00:50:06 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.892 00:50:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:13:25.169 00:50:11 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:25.169 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:25.169 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.169 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:25.170 Found net devices under 0000:27:00.0: cvl_0_0 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:25.170 Found net devices under 0000:27:00.1: cvl_0_1 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.170 00:50:11 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.170 00:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:25.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:13:25.170 00:13:25.170 --- 10.0.0.2 ping statistics --- 00:13:25.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.170 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:13:25.170 00:13:25.170 --- 10.0.0.1 ping statistics --- 00:13:25.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.170 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3373312 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3373312 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3373312 ']' 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.170 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.428 [2024-05-15 00:50:12.234736] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:13:25.428 [2024-05-15 00:50:12.234843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.428 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.428 [2024-05-15 00:50:12.357702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.428 [2024-05-15 00:50:12.454319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.428 [2024-05-15 00:50:12.454356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.428 [2024-05-15 00:50:12.454365] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.428 [2024-05-15 00:50:12.454374] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.428 [2024-05-15 00:50:12.454381] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.428 [2024-05-15 00:50:12.454489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.428 [2024-05-15 00:50:12.454579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.428 [2024-05-15 00:50:12.454679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.428 [2024-05-15 00:50:12.454690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.994 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:25.994 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:13:25.994 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.994 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.994 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.994 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.994 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:25.995 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.995 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.995 [2024-05-15 00:50:12.964802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.995 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.995 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:25.995 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.995 00:50:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:25.995 00:50:13 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.995 [2024-05-15 00:50:13.032627] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:25.995 [2024-05-15 00:50:13.032875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:25.995 00:50:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:30.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.303 rmmod nvme_tcp 00:13:44.303 rmmod nvme_fabrics 00:13:44.303 rmmod nvme_keyring 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:44.303 00:50:30 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3373312 ']' 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3373312 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3373312 ']' 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3373312 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3373312 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3373312' 00:13:44.303 killing process with pid 3373312 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3373312 00:13:44.303 [2024-05-15 00:50:30.797939] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:44.303 00:50:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3373312 00:13:44.303 00:50:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.303 00:50:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.303 00:50:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.303 00:50:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.303 00:50:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.304 00:50:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.304 00:50:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.304 00:50:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.840 00:50:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:46.840 00:13:46.840 real 0m26.892s 00:13:46.840 user 1m16.120s 00:13:46.840 sys 0m5.043s 00:13:46.840 00:50:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:46.840 00:50:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:46.840 ************************************ 00:13:46.840 END TEST nvmf_connect_disconnect 00:13:46.840 ************************************ 00:13:46.840 00:50:33 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:46.840 00:50:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:46.840 00:50:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:46.840 00:50:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:46.840 ************************************ 00:13:46.840 START TEST nvmf_multitarget 
00:13:46.840 ************************************ 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:46.840 * Looking for test storage... 00:13:46.840 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.840 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.841 
00:50:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:46.841 00:50:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ '' == 
mlx5 ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:52.113 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:52.113 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:52.113 Found net devices under 0000:27:00.0: cvl_0_0 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.113 00:50:38 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:52.113 Found net devices under 0000:27:00.1: cvl_0_1 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:52.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:13:52.113 00:13:52.113 --- 10.0.0.2 ping statistics --- 00:13:52.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.113 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:13:52.113 00:13:52.113 --- 10.0.0.1 ping statistics --- 00:13:52.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.113 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3381118 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3381118 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3381118 ']' 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.113 00:50:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:52.114 00:50:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.114 00:50:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.114 00:50:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:52.114 00:50:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.114 [2024-05-15 00:50:38.962735] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
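The multitarget run that follows exercises the nvmf_create_target and nvmf_delete_target RPCs through test/nvmf/target/multitarget_rpc.py, checking the target count with jq after each step. Condensed, the sequence the trace records below amounts to this sketch (same script path and default RPC socket assumed):

    rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc_py nvmf_get_targets | jq length               # 1: only the default target exists
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc_py nvmf_get_targets | jq length               # 3 after the two extra targets
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    $rpc_py nvmf_get_targets | jq length               # back to 1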
00:13:52.114 [2024-05-15 00:50:38.962835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.114 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.114 [2024-05-15 00:50:39.082421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.375 [2024-05-15 00:50:39.177283] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.375 [2024-05-15 00:50:39.177321] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.375 [2024-05-15 00:50:39.177331] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.375 [2024-05-15 00:50:39.177343] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.375 [2024-05-15 00:50:39.177350] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.375 [2024-05-15 00:50:39.177458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.375 [2024-05-15 00:50:39.177541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.375 [2024-05-15 00:50:39.177640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.375 [2024-05-15 00:50:39.177652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.635 00:50:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.635 00:50:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:13:52.635 00:50:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.635 00:50:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.635 00:50:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.892 00:50:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.892 00:50:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:52.892 00:50:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:52.892 00:50:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:52.892 00:50:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:52.892 00:50:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:52.892 "nvmf_tgt_1" 00:13:52.892 00:50:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:53.150 "nvmf_tgt_2" 00:13:53.150 00:50:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:53.150 00:50:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:53.150 00:50:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:53.150 00:50:40 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:53.150 true 00:13:53.150 00:50:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:53.150 true 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:53.408 rmmod nvme_tcp 00:13:53.408 rmmod nvme_fabrics 00:13:53.408 rmmod nvme_keyring 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3381118 ']' 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3381118 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3381118 ']' 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3381118 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3381118 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3381118' 00:13:53.408 killing process with pid 3381118 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3381118 00:13:53.408 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3381118 00:13:53.998 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:53.998 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:53.998 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:53.998 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.998 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:53.998 00:50:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.998 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.998 00:50:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.955 00:50:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:55.955 00:13:55.955 real 0m9.453s 00:13:55.955 user 0m8.364s 00:13:55.955 sys 0m4.421s 00:13:55.955 00:50:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:55.955 00:50:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.955 ************************************ 00:13:55.955 END TEST nvmf_multitarget 00:13:55.955 ************************************ 00:13:55.955 00:50:42 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:55.955 00:50:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:55.955 00:50:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:55.955 00:50:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.955 ************************************ 00:13:55.955 START TEST nvmf_rpc 00:13:55.955 ************************************ 00:13:55.955 00:50:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:56.214 * Looking for test storage... 00:13:56.214 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.214 00:50:43 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.215 00:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.842 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:02.843 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:02.843 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:02.843 Found net devices under 0000:27:00.0: cvl_0_0 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:02.843 Found net devices under 0000:27:00.1: cvl_0_1 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:02.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:14:02.843 00:14:02.843 --- 10.0.0.2 ping statistics --- 00:14:02.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.843 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:14:02.843 00:14:02.843 --- 10.0.0.1 ping statistics --- 00:14:02.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.843 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3385615 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3385615 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3385615 ']' 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.843 00:50:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.843 [2024-05-15 00:50:49.389205] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
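The rpc test starting here validates nvmf_get_stats output by counting poll groups and summing qpair fields across them (the jcount and jsum helpers in target/rpc.sh). The checks visible in the trace boil down to shell one-liners of this shape; scripts/rpc.py against the default socket is assumed:

    # One poll group per core in the 0xF mask, so 4 names are expected
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l
    # Admin and I/O qpair totals should both be 0 before any host connects
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'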
00:14:02.843 [2024-05-15 00:50:49.389311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.843 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.843 [2024-05-15 00:50:49.516696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.843 [2024-05-15 00:50:49.613357] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.843 [2024-05-15 00:50:49.613393] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.843 [2024-05-15 00:50:49.613402] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.843 [2024-05-15 00:50:49.613411] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.843 [2024-05-15 00:50:49.613421] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.843 [2024-05-15 00:50:49.613609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.843 [2024-05-15 00:50:49.613686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.843 [2024-05-15 00:50:49.613785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.843 [2024-05-15 00:50:49.613798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.102 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:03.102 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:14:03.102 00:50:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.102 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.102 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.102 00:50:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.102 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:03.362 "tick_rate": 1900000000, 00:14:03.362 "poll_groups": [ 00:14:03.362 { 00:14:03.362 "name": "nvmf_tgt_poll_group_000", 00:14:03.362 "admin_qpairs": 0, 00:14:03.362 "io_qpairs": 0, 00:14:03.362 "current_admin_qpairs": 0, 00:14:03.362 "current_io_qpairs": 0, 00:14:03.362 "pending_bdev_io": 0, 00:14:03.362 "completed_nvme_io": 0, 00:14:03.362 "transports": [] 00:14:03.362 }, 00:14:03.362 { 00:14:03.362 "name": "nvmf_tgt_poll_group_001", 00:14:03.362 "admin_qpairs": 0, 00:14:03.362 "io_qpairs": 0, 00:14:03.362 "current_admin_qpairs": 0, 00:14:03.362 "current_io_qpairs": 0, 00:14:03.362 "pending_bdev_io": 0, 00:14:03.362 "completed_nvme_io": 0, 00:14:03.362 "transports": [] 00:14:03.362 }, 00:14:03.362 { 00:14:03.362 "name": "nvmf_tgt_poll_group_002", 00:14:03.362 "admin_qpairs": 0, 00:14:03.362 "io_qpairs": 0, 00:14:03.362 "current_admin_qpairs": 0, 00:14:03.362 "current_io_qpairs": 0, 00:14:03.362 "pending_bdev_io": 0, 00:14:03.362 "completed_nvme_io": 0, 00:14:03.362 "transports": [] 
00:14:03.362 }, 00:14:03.362 { 00:14:03.362 "name": "nvmf_tgt_poll_group_003", 00:14:03.362 "admin_qpairs": 0, 00:14:03.362 "io_qpairs": 0, 00:14:03.362 "current_admin_qpairs": 0, 00:14:03.362 "current_io_qpairs": 0, 00:14:03.362 "pending_bdev_io": 0, 00:14:03.362 "completed_nvme_io": 0, 00:14:03.362 "transports": [] 00:14:03.362 } 00:14:03.362 ] 00:14:03.362 }' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.362 [2024-05-15 00:50:50.256944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:03.362 "tick_rate": 1900000000, 00:14:03.362 "poll_groups": [ 00:14:03.362 { 00:14:03.362 "name": "nvmf_tgt_poll_group_000", 00:14:03.362 "admin_qpairs": 0, 00:14:03.362 "io_qpairs": 0, 00:14:03.362 "current_admin_qpairs": 0, 00:14:03.362 "current_io_qpairs": 0, 00:14:03.362 "pending_bdev_io": 0, 00:14:03.362 "completed_nvme_io": 0, 00:14:03.362 "transports": [ 00:14:03.362 { 00:14:03.362 "trtype": "TCP" 00:14:03.362 } 00:14:03.362 ] 00:14:03.362 }, 00:14:03.362 { 00:14:03.362 "name": "nvmf_tgt_poll_group_001", 00:14:03.362 "admin_qpairs": 0, 00:14:03.362 "io_qpairs": 0, 00:14:03.362 "current_admin_qpairs": 0, 00:14:03.362 "current_io_qpairs": 0, 00:14:03.362 "pending_bdev_io": 0, 00:14:03.362 "completed_nvme_io": 0, 00:14:03.362 "transports": [ 00:14:03.362 { 00:14:03.362 "trtype": "TCP" 00:14:03.362 } 00:14:03.362 ] 00:14:03.362 }, 00:14:03.362 { 00:14:03.362 "name": "nvmf_tgt_poll_group_002", 00:14:03.362 "admin_qpairs": 0, 00:14:03.362 "io_qpairs": 0, 00:14:03.362 "current_admin_qpairs": 0, 00:14:03.362 "current_io_qpairs": 0, 00:14:03.362 "pending_bdev_io": 0, 00:14:03.362 "completed_nvme_io": 0, 00:14:03.362 "transports": [ 00:14:03.362 { 00:14:03.362 "trtype": "TCP" 00:14:03.362 } 00:14:03.362 ] 00:14:03.362 }, 00:14:03.362 { 00:14:03.362 "name": "nvmf_tgt_poll_group_003", 00:14:03.362 "admin_qpairs": 0, 00:14:03.362 "io_qpairs": 0, 00:14:03.362 "current_admin_qpairs": 0, 00:14:03.362 "current_io_qpairs": 0, 00:14:03.362 "pending_bdev_io": 0, 00:14:03.362 "completed_nvme_io": 0, 00:14:03.362 "transports": [ 00:14:03.362 { 00:14:03.362 "trtype": "TCP" 00:14:03.362 } 00:14:03.362 ] 00:14:03.362 } 00:14:03.362 ] 
00:14:03.362 }' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.362 Malloc1 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.362 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.362 [2024-05-15 00:50:50.421123] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:03.362 [2024-05-15 00:50:50.421424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.622 00:50:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:14:03.622 [2024-05-15 00:50:50.450796] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:14:03.622 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:03.622 could not add new controller: failed to write to nvme-fabrics device 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.622 00:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:05.000 00:50:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
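The steps traced above (target/rpc.sh@49 through @62) exercise the per-subsystem host allow list: a malloc bdev (64 MB, 512-byte blocks) is exported through nqn.2016-06.io.spdk:cnode1, allow_any_host is explicitly disabled, and the first nvme connect from this host NQN is expected to fail with "does not allow host" before nvmf_subsystem_add_host whitelists it and the same connect succeeds. A rough stand-alone equivalent is sketched below; rpc.py stands in for the harness's rpc_cmd wrapper and HOSTNQN for the uuid-based host NQN used in this run.

    # target side: export Malloc1 through cnode1 with the host allow list enforced
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1     # enforce the allow list
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: rejected until the host NQN is whitelisted, then accepted
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" \
        && echo "unexpected: connect should have been refused"
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"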
00:14:05.000 00:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:05.000 00:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.000 00:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:05.000 00:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:06.931 00:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:06.931 00:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:06.931 00:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.931 00:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:06.931 00:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.931 00:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:06.931 00:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.190 [2024-05-15 00:50:54.136712] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:14:07.190 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:07.190 could not add new controller: failed to write to nvme-fabrics device 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.190 00:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.566 00:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:08.566 00:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:08.566 00:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.566 00:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:08.566 00:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.097 [2024-05-15 00:50:57.840145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.097 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.098 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.098 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.098 00:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.098 00:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.472 00:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.472 00:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:12.472 00:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.472 00:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:12.472 00:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:14.377 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:14.377 
00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:14.377 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.377 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:14.377 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.377 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:14.377 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.635 [2024-05-15 00:51:01.522738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.635 00:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.012 00:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.012 00:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:16.012 00:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.012 00:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:16.012 00:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.544 00:51:05 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.544 [2024-05-15 00:51:05.244714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.544 00:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.920 00:51:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.920 00:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:19.920 00:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.920 00:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:19.920 00:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:21.825 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:21.825 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:21.825 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.825 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:21.825 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.825 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:21.825 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.085 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.086 [2024-05-15 00:51:08.965915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.086 00:51:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.464 00:51:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:14:23.464 00:51:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:23.464 00:51:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.464 00:51:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:23.464 00:51:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:25.373 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:25.373 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:25.373 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.373 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:25.373 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.373 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:25.373 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.712 
[2024-05-15 00:51:12.606114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.712 00:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:27.093 00:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.093 00:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:27.093 00:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.093 00:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:27.093 00:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:28.996 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:28.996 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:28.996 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.996 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:28.996 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.996 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:28.996 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.257 [2024-05-15 00:51:16.284238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.257 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 [2024-05-15 00:51:16.332245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 [2024-05-15 00:51:16.380305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.519 
00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 [2024-05-15 00:51:16.428339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.520 [2024-05-15 00:51:16.476409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
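The block above runs the same subsystem life-cycle five times with an initiator attached (target/rpc.sh@81 through @94: create cnode1, add the 10.0.0.2:4420 TCP listener, attach Malloc1 as namespace 5, nvme connect, wait for the SPDKISFASTANDAWESOME serial in lsblk, disconnect, remove the namespace, delete the subsystem) and then five more times over RPC only (@99 through @107). The nvmf_get_stats output that follows is reduced with the jq/awk helpers jsum and jcount to check that the poll groups actually carried qpairs. Below is a condensed sketch of one connected iteration, reusing this run's names; rpc.py again stands in for rpc_cmd and the serial polling only approximates the harness's waitforserial helpers.

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
        until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done   # ~waitforserial
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1

        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

    # the per-poll-group counters printed below are then summed, e.g.
    #   jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'
    # and checked to be > 0 before the EXIT trap (nvmftestfini) unloads nvme-tcp/-fabrics
    # and kills the nvmf_tgt process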
00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:29.520 "tick_rate": 1900000000, 00:14:29.520 "poll_groups": [ 00:14:29.520 { 00:14:29.520 "name": "nvmf_tgt_poll_group_000", 00:14:29.520 "admin_qpairs": 0, 00:14:29.520 "io_qpairs": 224, 00:14:29.520 "current_admin_qpairs": 0, 00:14:29.520 "current_io_qpairs": 0, 00:14:29.520 "pending_bdev_io": 0, 00:14:29.520 "completed_nvme_io": 373, 00:14:29.520 "transports": [ 00:14:29.520 { 00:14:29.520 "trtype": "TCP" 00:14:29.520 } 00:14:29.520 ] 00:14:29.520 }, 00:14:29.520 { 00:14:29.520 "name": "nvmf_tgt_poll_group_001", 00:14:29.520 "admin_qpairs": 1, 00:14:29.520 "io_qpairs": 223, 00:14:29.520 "current_admin_qpairs": 0, 00:14:29.520 "current_io_qpairs": 0, 00:14:29.520 "pending_bdev_io": 0, 00:14:29.520 "completed_nvme_io": 267, 00:14:29.520 "transports": [ 00:14:29.520 { 00:14:29.520 "trtype": "TCP" 00:14:29.520 } 00:14:29.520 ] 00:14:29.520 }, 00:14:29.520 { 00:14:29.520 "name": "nvmf_tgt_poll_group_002", 00:14:29.520 "admin_qpairs": 6, 00:14:29.520 "io_qpairs": 218, 00:14:29.520 "current_admin_qpairs": 0, 00:14:29.520 "current_io_qpairs": 0, 00:14:29.520 "pending_bdev_io": 0, 00:14:29.520 "completed_nvme_io": 227, 00:14:29.520 "transports": [ 00:14:29.520 { 00:14:29.520 "trtype": "TCP" 00:14:29.520 } 00:14:29.520 ] 00:14:29.520 }, 00:14:29.520 { 00:14:29.520 "name": "nvmf_tgt_poll_group_003", 00:14:29.520 "admin_qpairs": 0, 00:14:29.520 "io_qpairs": 224, 00:14:29.520 "current_admin_qpairs": 0, 00:14:29.520 "current_io_qpairs": 0, 00:14:29.520 "pending_bdev_io": 0, 00:14:29.520 "completed_nvme_io": 372, 00:14:29.520 "transports": [ 00:14:29.520 { 00:14:29.520 "trtype": "TCP" 00:14:29.520 } 00:14:29.520 ] 00:14:29.520 } 00:14:29.520 ] 00:14:29.520 }' 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:29.520 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.781 rmmod nvme_tcp 00:14:29.781 rmmod nvme_fabrics 00:14:29.781 rmmod nvme_keyring 00:14:29.781 
00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3385615 ']' 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3385615 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3385615 ']' 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3385615 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3385615 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3385615' 00:14:29.781 killing process with pid 3385615 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3385615 00:14:29.781 [2024-05-15 00:51:16.736006] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:29.781 00:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3385615 00:14:30.348 00:51:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.348 00:51:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.348 00:51:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.348 00:51:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.348 00:51:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.348 00:51:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.348 00:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.348 00:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.886 00:51:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.886 00:14:32.886 real 0m36.350s 00:14:32.886 user 1m51.426s 00:14:32.886 sys 0m6.033s 00:14:32.886 00:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.886 00:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.886 ************************************ 00:14:32.886 END TEST nvmf_rpc 00:14:32.886 ************************************ 00:14:32.886 00:51:19 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:32.886 00:51:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:32.886 00:51:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:32.886 00:51:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.886 ************************************ 00:14:32.886 START TEST nvmf_invalid 00:14:32.886 ************************************ 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:32.886 * Looking for test storage... 00:14:32.886 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.886 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.887 00:51:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 
-- # pci_devs+=("${e810[@]}") 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:38.161 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:38.161 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:38.161 Found net devices under 0000:27:00.0: cvl_0_0 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:38.161 Found net devices under 0000:27:00.1: cvl_0_1 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.161 00:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:38.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:38.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:14:38.161 00:14:38.161 --- 10.0.0.2 ping statistics --- 00:14:38.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.161 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:14:38.161 00:14:38.161 --- 10.0.0.1 ping statistics --- 00:14:38.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.161 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3395546 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3395546 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3395546 ']' 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:38.161 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.162 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.162 [2024-05-15 00:51:25.214139] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
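The nvmf/common.sh setup that scrolled by above (nvmf_tcp_init) builds the two-sided TCP topology these tests run against: the target-facing port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side, with an iptables rule admitting port 4420 and a ping in each direction to confirm the link. A condensed sketch of those commands as they appear in the log; interface names and addresses are specific to this machine:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator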
00:14:38.162 [2024-05-15 00:51:25.214244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.420 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.420 [2024-05-15 00:51:25.334422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.420 [2024-05-15 00:51:25.429860] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.420 [2024-05-15 00:51:25.429897] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.420 [2024-05-15 00:51:25.429906] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.420 [2024-05-15 00:51:25.429916] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.420 [2024-05-15 00:51:25.429923] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.420 [2024-05-15 00:51:25.430030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.420 [2024-05-15 00:51:25.430113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.420 [2024-05-15 00:51:25.430163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.420 [2024-05-15 00:51:25.430173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.989 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:38.989 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:14:38.989 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.989 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:38.989 00:51:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.989 00:51:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.989 00:51:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:38.989 00:51:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10846 00:14:39.247 [2024-05-15 00:51:26.102520] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:39.247 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:39.247 { 00:14:39.247 "nqn": "nqn.2016-06.io.spdk:cnode10846", 00:14:39.247 "tgt_name": "foobar", 00:14:39.247 "method": "nvmf_create_subsystem", 00:14:39.247 "req_id": 1 00:14:39.247 } 00:14:39.247 Got JSON-RPC error response 00:14:39.247 response: 00:14:39.247 { 00:14:39.247 "code": -32603, 00:14:39.247 "message": "Unable to find target foobar" 00:14:39.247 }' 00:14:39.247 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:39.247 { 00:14:39.247 "nqn": "nqn.2016-06.io.spdk:cnode10846", 00:14:39.247 "tgt_name": "foobar", 00:14:39.247 "method": "nvmf_create_subsystem", 00:14:39.247 "req_id": 1 00:14:39.247 } 00:14:39.247 Got JSON-RPC error response 00:14:39.247 response: 00:14:39.247 { 00:14:39.247 "code": -32603, 00:14:39.247 "message": "Unable to find target foobar" 00:14:39.247 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:39.247 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:39.247 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22440 00:14:39.247 [2024-05-15 00:51:26.270760] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22440: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:39.247 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:39.247 { 00:14:39.247 "nqn": "nqn.2016-06.io.spdk:cnode22440", 00:14:39.247 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:39.247 "method": "nvmf_create_subsystem", 00:14:39.247 "req_id": 1 00:14:39.247 } 00:14:39.247 Got JSON-RPC error response 00:14:39.247 response: 00:14:39.247 { 00:14:39.247 "code": -32602, 00:14:39.247 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:39.247 }' 00:14:39.247 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:39.247 { 00:14:39.247 "nqn": "nqn.2016-06.io.spdk:cnode22440", 00:14:39.247 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:39.247 "method": "nvmf_create_subsystem", 00:14:39.247 "req_id": 1 00:14:39.247 } 00:14:39.247 Got JSON-RPC error response 00:14:39.247 response: 00:14:39.247 { 00:14:39.247 "code": -32602, 00:14:39.247 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:39.247 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:39.247 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:39.247 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31887 00:14:39.505 [2024-05-15 00:51:26.434896] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31887: invalid model number 'SPDK_Controller' 00:14:39.505 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:39.505 { 00:14:39.505 "nqn": "nqn.2016-06.io.spdk:cnode31887", 00:14:39.505 "model_number": "SPDK_Controller\u001f", 00:14:39.505 "method": "nvmf_create_subsystem", 00:14:39.505 "req_id": 1 00:14:39.505 } 00:14:39.505 Got JSON-RPC error response 00:14:39.505 response: 00:14:39.505 { 00:14:39.505 "code": -32602, 00:14:39.505 "message": "Invalid MN SPDK_Controller\u001f" 00:14:39.505 }' 00:14:39.505 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:39.505 { 00:14:39.505 "nqn": "nqn.2016-06.io.spdk:cnode31887", 00:14:39.505 "model_number": "SPDK_Controller\u001f", 00:14:39.505 "method": "nvmf_create_subsystem", 00:14:39.505 "req_id": 1 00:14:39.505 } 00:14:39.505 Got JSON-RPC error response 00:14:39.505 response: 00:14:39.505 { 00:14:39.505 "code": -32602, 00:14:39.505 "message": "Invalid MN SPDK_Controller\u001f" 00:14:39.505 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:39.505 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:39.505 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:39.505 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' 
'93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:39.505 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:39.505 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:39.505 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 101 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:14:39.506 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bE}CW-KFq=G'\''v'\''~e lMbT' 00:14:39.784 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'bE}CW-KFq=G'\''v'\''~e lMbT' nqn.2016-06.io.spdk:cnode19741 00:14:39.784 [2024-05-15 00:51:26.687249] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19741: invalid serial number 'bE}CW-KFq=G'v'~e lMbT' 00:14:39.784 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:39.784 { 00:14:39.784 "nqn": "nqn.2016-06.io.spdk:cnode19741", 00:14:39.784 "serial_number": "bE}CW-KFq=G'\''v'\''~e lMbT", 00:14:39.784 "method": "nvmf_create_subsystem", 00:14:39.784 "req_id": 1 00:14:39.784 } 00:14:39.784 Got JSON-RPC error response 00:14:39.784 response: 00:14:39.784 { 00:14:39.784 "code": 
-32602, 00:14:39.784 "message": "Invalid SN bE}CW-KFq=G'\''v'\''~e lMbT" 00:14:39.784 }' 00:14:39.784 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:39.784 { 00:14:39.784 "nqn": "nqn.2016-06.io.spdk:cnode19741", 00:14:39.784 "serial_number": "bE}CW-KFq=G'v'~e lMbT", 00:14:39.784 "method": "nvmf_create_subsystem", 00:14:39.784 "req_id": 1 00:14:39.784 } 00:14:39.784 Got JSON-RPC error response 00:14:39.784 response: 00:14:39.784 { 00:14:39.784 "code": -32602, 00:14:39.784 "message": "Invalid SN bE}CW-KFq=G'v'~e lMbT" 00:14:39.784 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:39.785 00:51:26 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:39.785 00:51:26 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:39.785 00:51:26 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.046 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 
00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]] 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Wn>XG3UHgY2*c:n.v<K&XWHu;;|Ii8RA0}}#z)0*' 00:14:40.047 00:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Wn>XG3UHgY2*c:n.v<K&XWHu;;|Ii8RA0}}#z)0*' nqn.2016-06.io.spdk:cnode9709 00:14:40.047 [2024-05-15 00:51:27.055659] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9709: invalid model number 'Wn>XG3UHgY2*c:n.v<K&XWHu;;|Ii8RA0}}#z)0*' 00:14:40.047 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:40.047 { 00:14:40.047 "nqn": "nqn.2016-06.io.spdk:cnode9709", 00:14:40.047 "model_number": "Wn>XG3UHgY2*c:n.v<\u007fK&XWHu;;|Ii8RA0}}#z)0*", 00:14:40.047 "method": "nvmf_create_subsystem", 00:14:40.047 "req_id": 1 00:14:40.047 } 00:14:40.047 Got JSON-RPC error response 00:14:40.047 response: 00:14:40.047 { 00:14:40.047 "code": -32602, 00:14:40.047 "message": "Invalid MN Wn>XG3UHgY2*c:n.v<\u007fK&XWHu;;|Ii8RA0}}#z)0*" 00:14:40.047 }' 00:14:40.047 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:40.047 { 00:14:40.047 "nqn": "nqn.2016-06.io.spdk:cnode9709", 00:14:40.047 "model_number": "Wn>XG3UHgY2*c:n.v<\u007fK&XWHu;;|Ii8RA0}}#z)0*", 00:14:40.047 "method": "nvmf_create_subsystem", 00:14:40.047 "req_id": 1 00:14:40.047 } 00:14:40.047 Got JSON-RPC error response 00:14:40.047 response: 00:14:40.047 { 00:14:40.047 "code": -32602, 00:14:40.047 "message": "Invalid MN Wn>XG3UHgY2*c:n.v<\u007fK&XWHu;;|Ii8RA0}}#z)0*" 00:14:40.047 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:40.047 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:40.308 [2024-05-15 00:51:27.211917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.308 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:40.567 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:40.567 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:40.567 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:40.567 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:40.567 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:40.567 [2024-05-15 00:51:27.540222] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:40.567 [2024-05-15 00:51:27.540317] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:40.567 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:40.567 { 00:14:40.567 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:40.567 "listen_address": { 00:14:40.567 "trtype": "tcp", 00:14:40.567 "traddr": "", 00:14:40.567 "trsvcid": "4421" 00:14:40.567 }, 00:14:40.567 "method": "nvmf_subsystem_remove_listener", 00:14:40.567 "req_id": 1 00:14:40.567 } 00:14:40.567 Got JSON-RPC error response 00:14:40.567 response: 00:14:40.567 { 00:14:40.567 "code": -32602, 00:14:40.567 "message": "Invalid parameters" 00:14:40.567 }' 00:14:40.567 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:40.567 { 00:14:40.567 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:40.567 "listen_address": { 00:14:40.567 "trtype": "tcp", 00:14:40.567 "traddr": "", 00:14:40.567 "trsvcid": "4421" 00:14:40.567 }, 00:14:40.567 "method": "nvmf_subsystem_remove_listener", 00:14:40.567 "req_id": 1 00:14:40.567 } 00:14:40.567 Got JSON-RPC error response 00:14:40.567 response: 00:14:40.567 { 00:14:40.567 "code": -32602, 00:14:40.567 "message": "Invalid parameters" 00:14:40.567 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:40.567 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22183 -i 0 00:14:40.825 [2024-05-15 00:51:27.704374] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22183: invalid cntlid range [0-65519] 00:14:40.825 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:40.825 { 00:14:40.825 "nqn": "nqn.2016-06.io.spdk:cnode22183", 00:14:40.825 "min_cntlid": 0, 00:14:40.825 "method": "nvmf_create_subsystem", 00:14:40.825 "req_id": 1 00:14:40.825 } 00:14:40.825 Got JSON-RPC error response 00:14:40.825 response: 00:14:40.825 { 00:14:40.825 "code": -32602, 00:14:40.825 "message": "Invalid cntlid range [0-65519]" 00:14:40.825 }' 00:14:40.825 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:40.825 { 00:14:40.825 "nqn": "nqn.2016-06.io.spdk:cnode22183", 00:14:40.825 "min_cntlid": 0, 00:14:40.825 "method": "nvmf_create_subsystem", 00:14:40.825 "req_id": 1 00:14:40.825 } 00:14:40.825 
Got JSON-RPC error response 00:14:40.825 response: 00:14:40.825 { 00:14:40.825 "code": -32602, 00:14:40.825 "message": "Invalid cntlid range [0-65519]" 00:14:40.826 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:40.826 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11616 -i 65520 00:14:40.826 [2024-05-15 00:51:27.880573] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11616: invalid cntlid range [65520-65519] 00:14:41.084 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:41.084 { 00:14:41.084 "nqn": "nqn.2016-06.io.spdk:cnode11616", 00:14:41.084 "min_cntlid": 65520, 00:14:41.084 "method": "nvmf_create_subsystem", 00:14:41.084 "req_id": 1 00:14:41.084 } 00:14:41.084 Got JSON-RPC error response 00:14:41.084 response: 00:14:41.084 { 00:14:41.084 "code": -32602, 00:14:41.084 "message": "Invalid cntlid range [65520-65519]" 00:14:41.084 }' 00:14:41.084 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:41.084 { 00:14:41.084 "nqn": "nqn.2016-06.io.spdk:cnode11616", 00:14:41.084 "min_cntlid": 65520, 00:14:41.084 "method": "nvmf_create_subsystem", 00:14:41.084 "req_id": 1 00:14:41.084 } 00:14:41.084 Got JSON-RPC error response 00:14:41.084 response: 00:14:41.084 { 00:14:41.084 "code": -32602, 00:14:41.084 "message": "Invalid cntlid range [65520-65519]" 00:14:41.084 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:41.084 00:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25796 -I 0 00:14:41.084 [2024-05-15 00:51:28.032750] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25796: invalid cntlid range [1-0] 00:14:41.084 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:41.084 { 00:14:41.084 "nqn": "nqn.2016-06.io.spdk:cnode25796", 00:14:41.084 "max_cntlid": 0, 00:14:41.084 "method": "nvmf_create_subsystem", 00:14:41.084 "req_id": 1 00:14:41.084 } 00:14:41.084 Got JSON-RPC error response 00:14:41.084 response: 00:14:41.084 { 00:14:41.084 "code": -32602, 00:14:41.084 "message": "Invalid cntlid range [1-0]" 00:14:41.084 }' 00:14:41.084 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:41.084 { 00:14:41.084 "nqn": "nqn.2016-06.io.spdk:cnode25796", 00:14:41.084 "max_cntlid": 0, 00:14:41.084 "method": "nvmf_create_subsystem", 00:14:41.084 "req_id": 1 00:14:41.084 } 00:14:41.084 Got JSON-RPC error response 00:14:41.084 response: 00:14:41.084 { 00:14:41.084 "code": -32602, 00:14:41.084 "message": "Invalid cntlid range [1-0]" 00:14:41.084 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:41.084 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7048 -I 65520 00:14:41.344 [2024-05-15 00:51:28.176908] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7048: invalid cntlid range [1-65520] 00:14:41.344 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:41.344 { 00:14:41.344 "nqn": "nqn.2016-06.io.spdk:cnode7048", 00:14:41.344 "max_cntlid": 65520, 00:14:41.344 "method": "nvmf_create_subsystem", 00:14:41.344 "req_id": 1 00:14:41.344 } 00:14:41.344 Got JSON-RPC error response 00:14:41.344 
response: 00:14:41.344 { 00:14:41.345 "code": -32602, 00:14:41.345 "message": "Invalid cntlid range [1-65520]" 00:14:41.345 }' 00:14:41.345 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:41.345 { 00:14:41.345 "nqn": "nqn.2016-06.io.spdk:cnode7048", 00:14:41.345 "max_cntlid": 65520, 00:14:41.345 "method": "nvmf_create_subsystem", 00:14:41.345 "req_id": 1 00:14:41.345 } 00:14:41.345 Got JSON-RPC error response 00:14:41.345 response: 00:14:41.345 { 00:14:41.345 "code": -32602, 00:14:41.345 "message": "Invalid cntlid range [1-65520]" 00:14:41.345 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:41.345 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20088 -i 6 -I 5 00:14:41.345 [2024-05-15 00:51:28.317068] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20088: invalid cntlid range [6-5] 00:14:41.345 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:41.345 { 00:14:41.345 "nqn": "nqn.2016-06.io.spdk:cnode20088", 00:14:41.345 "min_cntlid": 6, 00:14:41.345 "max_cntlid": 5, 00:14:41.345 "method": "nvmf_create_subsystem", 00:14:41.345 "req_id": 1 00:14:41.345 } 00:14:41.345 Got JSON-RPC error response 00:14:41.345 response: 00:14:41.345 { 00:14:41.345 "code": -32602, 00:14:41.345 "message": "Invalid cntlid range [6-5]" 00:14:41.345 }' 00:14:41.345 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:41.345 { 00:14:41.345 "nqn": "nqn.2016-06.io.spdk:cnode20088", 00:14:41.345 "min_cntlid": 6, 00:14:41.345 "max_cntlid": 5, 00:14:41.345 "method": "nvmf_create_subsystem", 00:14:41.345 "req_id": 1 00:14:41.345 } 00:14:41.345 Got JSON-RPC error response 00:14:41.345 response: 00:14:41.345 { 00:14:41.345 "code": -32602, 00:14:41.345 "message": "Invalid cntlid range [6-5]" 00:14:41.345 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:41.345 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:41.604 { 00:14:41.604 "name": "foobar", 00:14:41.604 "method": "nvmf_delete_target", 00:14:41.604 "req_id": 1 00:14:41.604 } 00:14:41.604 Got JSON-RPC error response 00:14:41.604 response: 00:14:41.604 { 00:14:41.604 "code": -32602, 00:14:41.604 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:41.604 }' 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:41.604 { 00:14:41.604 "name": "foobar", 00:14:41.604 "method": "nvmf_delete_target", 00:14:41.604 "req_id": 1 00:14:41.604 } 00:14:41.604 Got JSON-RPC error response 00:14:41.604 response: 00:14:41.604 { 00:14:41.604 "code": -32602, 00:14:41.604 "message": "The specified target doesn't exist, cannot delete it." 
00:14:41.604 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.604 rmmod nvme_tcp 00:14:41.604 rmmod nvme_fabrics 00:14:41.604 rmmod nvme_keyring 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3395546 ']' 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3395546 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3395546 ']' 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3395546 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3395546 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3395546' 00:14:41.604 killing process with pid 3395546 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3395546 00:14:41.604 [2024-05-15 00:51:28.520658] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:41.604 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3395546 00:14:42.176 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.176 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:42.176 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:42.176 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.176 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:42.176 00:51:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.176 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.176 00:51:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.082 00:51:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
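The nvmf_invalid trace above feeds deliberately bad input to nvmf_create_subsystem over JSON-RPC: an over-long random model number, min/max cntlid values outside 1-65519, and an inverted range, each of which must come back as error -32602. A minimal sketch of one such negative check, reusing the rpc.py path and cnode name from the trace (both are specific to this workspace; rpc.py's exit status is deliberately ignored so only the error text is asserted):

  # Sketch only: reproduce the min_cntlid=0 rejection seen in the trace above.
  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22183 -i 0 2>&1) || true
  # The target is expected to reject the request with code -32602 and this message.
  [[ $out == *"Invalid cntlid range"* ]] || { echo "unexpected response: $out"; exit 1; }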
00:14:44.082 00:14:44.082 real 0m11.649s 00:14:44.082 user 0m17.178s 00:14:44.082 sys 0m5.153s 00:14:44.082 00:51:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.082 00:51:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.082 ************************************ 00:14:44.082 END TEST nvmf_invalid 00:14:44.082 ************************************ 00:14:44.082 00:51:31 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:44.082 00:51:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:44.082 00:51:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.082 00:51:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:44.082 ************************************ 00:14:44.082 START TEST nvmf_abort 00:14:44.082 ************************************ 00:14:44.082 00:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:44.340 * Looking for test storage... 00:14:44.340 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
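The nvmf_abort test starting here is driven by target/abort.sh, and the trace below expands every step. Condensed, the flow visible in this log is roughly the following (rpc_cmd, nvmftestinit, nvmfappstart and nvmftestfini are helpers pulled in from test/nvmf/common.sh above; this is a summary of the traced commands, not the verbatim script):

  nvmftestinit                                    # pick the ice ports, build the netns + 10.0.0.x addressing
  nvmfappstart -m 0xE                             # start nvmf_tgt inside the target netns on cores 1-3
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB backing bdev, 4 KiB blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # drive queued I/O against the delayed namespace and abort it from the example app
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  nvmftestfini                                    # unload nvme-tcp modules, kill nvmf_tgt, tear down the netns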
00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:44.340 00:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:49.615 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.616 
00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:49.616 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:49.616 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:49.616 Found net devices under 0000:27:00.0: cvl_0_0 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:49.616 Found net devices under 0000:27:00.1: cvl_0_1 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.616 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:49.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:49.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:14:49.876 00:14:49.876 --- 10.0.0.2 ping statistics --- 00:14:49.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.876 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:14:49.876 00:14:49.876 --- 10.0.0.1 ping statistics --- 00:14:49.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.876 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3400298 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3400298 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3400298 ']' 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.876 00:51:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:49.876 [2024-05-15 00:51:36.881351] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:14:49.876 [2024-05-15 00:51:36.881426] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.876 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.168 [2024-05-15 00:51:37.003341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:50.168 [2024-05-15 00:51:37.163422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.168 [2024-05-15 00:51:37.163474] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.168 [2024-05-15 00:51:37.163490] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.168 [2024-05-15 00:51:37.163504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.168 [2024-05-15 00:51:37.163515] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.168 [2024-05-15 00:51:37.163676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.168 [2024-05-15 00:51:37.163787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.168 [2024-05-15 00:51:37.163795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.734 [2024-05-15 00:51:37.644150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.734 Malloc0 00:14:50.734 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.735 Delay0 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:50.735 00:51:37 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.735 [2024-05-15 00:51:37.746760] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:50.735 [2024-05-15 00:51:37.747093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.735 00:51:37 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:50.996 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.996 [2024-05-15 00:51:37.889396] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:53.532 Initializing NVMe Controllers 00:14:53.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:53.532 controller IO queue size 128 less than required 00:14:53.532 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:53.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:53.532 Initialization complete. Launching workers. 
00:14:53.532 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 48783 00:14:53.532 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 48848, failed to submit 62 00:14:53.532 success 48787, unsuccess 61, failed 0 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.532 00:51:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.532 rmmod nvme_tcp 00:14:53.532 rmmod nvme_fabrics 00:14:53.532 rmmod nvme_keyring 00:14:53.532 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.532 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:53.532 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:53.532 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3400298 ']' 00:14:53.532 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3400298 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3400298 ']' 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3400298 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3400298 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3400298' 00:14:53.533 killing process with pid 3400298 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3400298 00:14:53.533 [2024-05-15 00:51:40.115526] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:53.533 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3400298 00:14:53.792 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.792 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.792 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.792 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.792 
00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.792 00:51:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.792 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.792 00:51:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.328 00:51:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.328 00:14:56.328 real 0m11.672s 00:14:56.329 user 0m13.951s 00:14:56.329 sys 0m4.836s 00:14:56.329 00:51:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:56.329 00:51:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:56.329 ************************************ 00:14:56.329 END TEST nvmf_abort 00:14:56.329 ************************************ 00:14:56.329 00:51:42 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:56.329 00:51:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:56.329 00:51:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:56.329 00:51:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.329 ************************************ 00:14:56.329 START TEST nvmf_ns_hotplug_stress 00:14:56.329 ************************************ 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:56.329 * Looking for test storage... 00:14:56.329 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.329 00:51:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.329 
00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:56.329 00:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.620 00:51:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:01.620 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:01.620 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.620 00:51:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:01.620 Found net devices under 0000:27:00.0: cvl_0_0 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:01.620 Found net devices under 0000:27:00.1: cvl_0_1 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.620 
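
What the preceding block is doing: common.sh builds lists of supported NIC PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox parts), then maps each matching PCI function to its kernel netdev through the /sys/bus/pci/devices/$pci/net/ glob. Here both functions of 0000:27:00 (device 0x159b, ice driver) resolve to cvl_0_0 and cvl_0_1, and cvl_0_0 is chosen as the target-side interface. A rough, self-contained equivalent of that sysfs lookup, assuming lspci is available and mirroring rather than reproducing the SPDK code, looks like:

    #!/usr/bin/env bash
    # Sketch: list net interfaces backed by Intel 0x159b PCI functions via sysfs.
    set -euo pipefail

    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done
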
00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.620 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:15:01.621 00:15:01.621 --- 10.0.0.2 ping statistics --- 00:15:01.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.621 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:01.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:15:01.621 00:15:01.621 --- 10.0.0.1 ping statistics --- 00:15:01.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.621 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:01.621 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3404977 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3404977 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3404977 ']' 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.881 00:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:01.881 [2024-05-15 00:51:48.744966] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:15:01.881 [2024-05-15 00:51:48.745036] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.881 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.881 [2024-05-15 00:51:48.860909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:02.141 [2024-05-15 00:51:49.019978] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
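
The nvmf_tcp_init sequence above moves the target port cvl_0_0 into its own network namespace (cvl_0_0_ns_spdk) and leaves the initiator port cvl_0_1 in the root namespace, so traffic between 10.0.0.1 and 10.0.0.2 really crosses the two physical functions instead of short-circuiting through the local stack; it then opens TCP/4420 in the host firewall and ping-checks both directions before nvmf_tgt is started inside the namespace via ip netns exec. Condensed from the commands traced above (same interface and address names, error handling omitted):

    ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host
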
00:15:02.141 [2024-05-15 00:51:49.020033] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.141 [2024-05-15 00:51:49.020057] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.141 [2024-05-15 00:51:49.020077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.141 [2024-05-15 00:51:49.020090] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.141 [2024-05-15 00:51:49.020246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.141 [2024-05-15 00:51:49.020372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.141 [2024-05-15 00:51:49.020380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.712 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:02.712 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:15:02.712 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.712 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.712 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.712 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.712 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:02.712 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:02.712 [2024-05-15 00:51:49.639184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.712 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:02.971 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.971 [2024-05-15 00:51:49.936593] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:02.971 [2024-05-15 00:51:49.936933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.971 00:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:03.231 00:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:03.231 Malloc0 00:15:03.231 00:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:03.491 Delay0 00:15:03.491 00:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
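
With the target up and answering on the default /var/tmp/spdk.sock RPC socket, ns_hotplug_stress.sh provisions it entirely through rpc.py: a TCP transport, subsystem cnode1 (-a allows any host, -m 10 caps it at 10 namespaces), data and discovery listeners on 10.0.0.2:4420, and a Malloc0 bdev wrapped in a Delay0 delay bdev; a NULL1 null bdev is created just below. With the long paths shortened, the sequence is approximately:

    # Condensed from the rpc.py calls traced above and just below (paths shortened).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0       # 32 MB backing bdev, 512-byte blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py bdev_null_create NULL1 1000 512            # 1000 MB null bdev, resized by the loop
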
target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.751 00:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:03.751 NULL1 00:15:03.751 00:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:04.009 00:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3405505 00:15:04.009 00:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:04.009 00:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.009 00:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:04.009 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.009 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.266 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:04.266 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:04.266 [2024-05-15 00:51:51.327015] bdev.c:4995:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:15:04.524 true 00:15:04.524 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:04.524 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.524 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.785 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:04.785 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:04.785 true 00:15:04.785 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:04.785 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.045 00:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.304 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:05.304 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:05.304 true 00:15:05.304 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:05.304 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.562 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.562 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:05.562 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:05.821 true 00:15:05.821 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:05.821 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.083 00:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.083 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:06.083 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:06.344 true 00:15:06.344 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:06.344 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.344 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.610 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:06.610 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:06.869 true 00:15:06.869 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:06.869 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.869 00:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.129 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:07.129 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:07.129 true 00:15:07.387 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 
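
From this point to the end of the excerpt the script is inside its stress loop: spdk_nvme_perf (PID 3405505, full path under spdk/build/bin in the trace) runs a 30-second, queue-depth-128 randread workload of 512-byte reads against 10.0.0.2:4420 from the root namespace, while the loop keeps churning namespaces for as long as kill -0 reports that process alive: detach namespace 1, re-attach Delay0, and resize NULL1 one step larger per pass, which is why null_size counts 1001, 1002, ... through the rest of the trace. A reconstruction of that repeating pattern (not the literal script text):

    # Shape of the hotplug stress loop, reconstructed from the repeating trace entries.
    spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID"; do          # keep going while the perf initiator is alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done
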
00:15:07.387 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.387 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.648 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:07.648 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:07.648 true 00:15:07.648 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:07.648 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.908 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.168 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:08.168 00:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:08.168 true 00:15:08.168 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:08.168 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.428 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.428 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:08.428 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:08.688 true 00:15:08.688 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:08.689 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.948 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.948 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:08.948 00:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:09.208 true 00:15:09.208 00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:09.208 00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.208 
00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.468 00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:09.468 00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:09.729 true 00:15:09.729 00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:09.729 00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.729 00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.989 00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:09.989 00:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:09.989 true 00:15:09.989 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:09.989 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.249 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.509 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:10.509 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:10.509 true 00:15:10.509 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:10.509 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.769 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.030 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:11.030 00:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:11.030 true 00:15:11.030 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:11.030 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.325 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.325 00:51:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:11.325 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:11.608 true 00:15:11.608 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:11.608 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.608 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.866 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:11.866 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:12.126 true 00:15:12.126 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:12.126 00:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.126 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.386 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:12.386 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:12.386 true 00:15:12.386 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:12.386 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.647 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.906 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:12.906 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:12.906 true 00:15:12.906 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:12.906 00:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.163 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.422 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:13.422 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:13.422 true 00:15:13.422 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:13.422 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.681 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.681 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:13.681 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:13.941 true 00:15:13.941 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:13.941 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.941 00:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.202 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:14.202 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:14.462 true 00:15:14.462 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:14.462 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.462 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.721 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:14.721 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:14.721 true 00:15:14.981 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:14.981 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.981 00:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.240 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:15.240 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:15.240 true 00:15:15.240 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 
00:15:15.240 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.499 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.499 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:15.500 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:15.759 true 00:15:15.759 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:15.759 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.018 00:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.018 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:16.018 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:16.277 true 00:15:16.277 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:16.277 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.537 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.537 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:16.537 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:16.796 true 00:15:16.796 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:16.796 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.796 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.056 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:17.056 00:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:17.056 true 00:15:17.056 00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:17.056 00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.317 
00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.577 00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:17.577 00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:17.577 true 00:15:17.577 00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:17.577 00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.835 00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.836 00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:15:17.836 00:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:18.093 true 00:15:18.093 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:18.093 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.351 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.351 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:15:18.351 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:15:18.610 true 00:15:18.610 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:18.610 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.610 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.871 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:15:18.871 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:15:18.871 true 00:15:18.871 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:18.871 00:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.130 00:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.388 00:52:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:15:19.388 00:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:15:19.388 true 00:15:19.388 00:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:19.388 00:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.646 00:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.904 00:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:15:19.904 00:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:15:19.904 true 00:15:19.904 00:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:19.904 00:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.165 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.165 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:15:20.165 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:15:20.425 true 00:15:20.425 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:20.425 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.425 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.685 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:15:20.685 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:15:20.945 true 00:15:20.945 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:20.945 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.945 00:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.204 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:15:21.204 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:15:21.204 true 00:15:21.204 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:21.204 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.462 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.721 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:15:21.721 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:15:21.721 true 00:15:21.721 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:21.721 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.980 00:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.980 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:15:21.980 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:15:22.239 true 00:15:22.239 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:22.239 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.498 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.498 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:15:22.498 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:15:22.757 true 00:15:22.757 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:22.757 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.757 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.015 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:15:23.015 00:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:15:23.015 true 00:15:23.274 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 
00:15:23.274 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.274 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.534 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:15:23.534 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:15:23.534 true 00:15:23.534 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:23.534 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.793 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.053 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:15:24.053 00:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:15:24.053 true 00:15:24.053 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:24.053 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.313 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.313 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:15:24.313 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:15:24.573 true 00:15:24.573 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:24.573 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.834 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.834 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:15:24.834 00:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:15:25.095 true 00:15:25.095 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:25.095 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.355 
00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.355 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:15:25.355 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:15:25.613 true 00:15:25.613 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:25.613 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.613 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.872 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:15:25.872 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:15:26.131 true 00:15:26.131 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:26.131 00:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.131 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.392 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:15:26.392 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:15:26.392 true 00:15:26.392 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:26.392 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.651 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.948 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:15:26.948 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:15:26.948 true 00:15:26.948 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:26.948 00:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.206 00:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:27.206 00:52:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:15:27.206 00:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:15:27.464 true 00:15:27.464 00:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:27.464 00:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.724 00:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:27.724 00:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:15:27.724 00:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:15:27.984 true 00:15:27.984 00:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:27.984 00:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.984 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.243 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:15:28.243 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:15:28.243 true 00:15:28.502 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:28.502 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.502 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.760 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:15:28.760 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:15:28.760 true 00:15:28.760 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:28.760 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.019 00:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.278 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:15:29.278 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:15:29.278 true 00:15:29.278 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:29.278 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.537 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.537 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:15:29.537 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:15:29.796 true 00:15:29.796 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:29.796 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.054 00:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.054 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:15:30.054 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:15:30.311 true 00:15:30.311 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:30.311 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.311 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.568 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:15:30.568 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:15:30.826 true 00:15:30.826 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:30.826 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.826 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.085 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:15:31.085 00:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:15:31.085 true 00:15:31.085 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 
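The sh@44-sh@50 entries above and below repeat the same handful of traced commands with null_size increasing by one each pass; they are the expansion of a liveness-gated hot-plug/resize loop: while the background I/O process (PID 3405505 in this run) is still alive, namespace 1 is removed from nqn.2016-06.io.spdk:cnode1, the Delay0 bdev is re-attached as a namespace, and the NULL1 bdev is resized to the new null_size. A minimal bash sketch of that loop, reconstructed only from the traced commands (RPC_PY, NQN and PERF_PID are stand-in names introduced here, and the exact loop shape is an assumption, not the verbatim ns_hotplug_stress.sh):

  RPC_PY=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  PERF_PID=3405505                                  # background I/O generator watched by the loop
  null_size=1042                                    # first value visible in this excerpt; the real start is earlier in the log
  while kill -0 "$PERF_PID" 2>/dev/null; do         # keep going while the I/O process is alive
      $RPC_PY nvmf_subsystem_remove_ns "$NQN" 1     # hot-remove namespace 1
      $RPC_PY nvmf_subsystem_add_ns "$NQN" Delay0   # re-attach the Delay0 bdev as a namespace
      null_size=$((null_size + 1))
      $RPC_PY bdev_null_resize NULL1 "$null_size"   # resize the NULL1 bdev while I/O keeps running
  done

Once kill -0 starts reporting "No such process" the loop exits, which is exactly the transition visible a few entries further down, right after the I/O summary table.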
00:15:31.085 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.344 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.602 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:15:31.602 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:15:31.602 true 00:15:31.602 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:31.602 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.859 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.117 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:15:32.117 00:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:15:32.117 true 00:15:32.117 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:32.117 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.375 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.375 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:15:32.375 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:15:32.634 true 00:15:32.634 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:32.634 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.892 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.892 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1062 00:15:32.892 00:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:15:33.150 true 00:15:33.150 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:33.150 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.150 
00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.406 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1063 00:15:33.406 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:15:33.663 true 00:15:33.663 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:33.663 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.663 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.921 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1064 00:15:33.921 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064 00:15:33.921 true 00:15:33.921 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:33.921 00:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.180 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.180 Initializing NVMe Controllers 00:15:34.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:34.180 Controller IO queue size 128, less than required. 00:15:34.180 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:34.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:34.180 Initialization complete. Launching workers. 
00:15:34.180 ======================================================== 00:15:34.180 Latency(us) 00:15:34.180 Device Information : IOPS MiB/s Average min max 00:15:34.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27816.30 13.58 4601.68 3026.03 43669.41 00:15:34.180 ======================================================== 00:15:34.180 Total : 27816.30 13.58 4601.68 3026.03 43669.41 00:15:34.180 00:15:34.439 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1065 00:15:34.439 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1065 00:15:34.439 true 00:15:34.440 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3405505 00:15:34.440 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3405505) - No such process 00:15:34.440 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3405505 00:15:34.440 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.699 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:34.699 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:34.699 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:34.699 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:34.699 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:34.699 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:34.957 null0 00:15:34.957 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:34.957 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:34.957 00:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:34.957 null1 00:15:35.215 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:35.215 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:35.215 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:35.215 null2 00:15:35.215 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:35.215 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:35.215 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:35.474 null3 00:15:35.474 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:35.475 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:35.475 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:35.475 null4 00:15:35.475 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:35.475 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:35.475 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:35.735 null5 00:15:35.735 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:35.735 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:35.735 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:35.735 null6 00:15:35.735 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:35.735 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:35.735 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:35.997 null7 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
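The sh@58-sh@64 entries above switch the test into its parallel phase: nthreads is set to 8, an empty pids array is created, eight 100 MB null bdevs with a 4096-byte block size (null0 through null7) are created, and one add_remove worker per namespace ID is started in the background with its PID appended to pids, to be collected by the wait at sh@66 below. A sketch of that setup, again reconstructed from the traced commands rather than copied from the script (RPC_PY is a stand-in; add_remove is the worker traced at sh@14-sh@18 and is sketched just after the next trace line):

  RPC_PY=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $RPC_PY bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4096-byte blocks
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &             # namespace ID i+1 backed by bdev null$i
      pids+=($!)
  done
  wait "${pids[@]}"                                # e.g. wait 3411696 3411697 ... in this run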
00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
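The interleaved sh@14-sh@18 entries above and below are those eight workers running concurrently: each one latches onto a fixed namespace ID and bdev (local nsid=... bdev=null...), then attaches and immediately detaches that namespace ten times. A sketch of the worker as the trace implies it (RPC_PY and NQN as in the earlier sketches; the exact function body is an assumption, not the verbatim script):

  RPC_PY=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $RPC_PY nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"   # attach $bdev as namespace $nsid
          $RPC_PY nvmf_subsystem_remove_ns "$NQN" "$nsid"           # hot-remove it straight away
      done
  }

Each RPC is an independent rpc.py invocation and the eight workers are not synchronized with one another, which is why the add_ns and remove_ns entries interleave freely throughout the rest of this log.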
00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3411696 3411697 3411698 3411700 3411702 3411704 3411706 3411707 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:35.997 00:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:35.997 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:35.997 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.258 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.519 00:52:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.519 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:36.779 
00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.779 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:15:36.780 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:36.780 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:36.780 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:37.040 00:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.040 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:37.301 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:37.302 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.562 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:37.563 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:37.822 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:38.082 00:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.082 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:38.343 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.604 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:38.604 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:38.605 00:52:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.605 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:38.866 
00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:38.866 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.867 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.867 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:38.867 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.867 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.867 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:38.867 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.867 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.867 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:38.867 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:39.128 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:39.128 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:39.128 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.128 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.129 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
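The block above is the heart of ns_hotplug_stress: judging by the @16/@17/@18 markers, a counter loop in target/ns_hotplug_stress.sh keeps calling nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns against nqn.2016-06.io.spdk:cnode1, cycling namespace IDs 1-8 backed by null bdevs null0-null7. A minimal sketch of that pattern, assuming a random nsid picker and sequential calls rather than the script's real selection and backgrounding logic:

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
nqn=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do                        # same guard as the @16 entries
    n=$(( RANDOM % 8 + 1 ))                   # assumption: nsid selection here is illustrative only
    "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" || true   # @17
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" || true                      # @18
    (( ++i ))                                 # @16
done

The add and remove entries sharing the same timestamps in the trace suggest the real script overlaps the two paths rather than strictly alternating them, which is exactly the hot-plug race the test is after.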
00:15:39.129 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:39.129 00:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.129 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:39.390 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:39.650 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.650 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.650 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:39.650 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.650 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.650 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.650 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.911 rmmod nvme_tcp 00:15:39.911 rmmod nvme_fabrics 00:15:39.911 rmmod nvme_keyring 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3404977 ']' 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3404977 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 
-- # '[' -z 3404977 ']' 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3404977 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3404977 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3404977' 00:15:39.911 killing process with pid 3404977 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3404977 00:15:39.911 [2024-05-15 00:52:26.853957] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:39.911 00:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3404977 00:15:40.482 00:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:40.482 00:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:40.482 00:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:40.482 00:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.482 00:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:40.482 00:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.482 00:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.482 00:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.389 00:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:42.390 00:15:42.390 real 0m46.550s 00:15:42.390 user 3m12.991s 00:15:42.390 sys 0m16.285s 00:15:42.390 00:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:42.390 00:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.390 ************************************ 00:15:42.390 END TEST nvmf_ns_hotplug_stress 00:15:42.390 ************************************ 00:15:42.390 00:52:29 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:42.390 00:52:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:42.390 00:52:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:42.390 00:52:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:42.649 ************************************ 00:15:42.649 START TEST nvmf_connect_stress 00:15:42.649 ************************************ 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:42.649 * Looking for test storage... 
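Between the two tests, nvmftestfini dismantles the fixture: the kernel NVMe-over-TCP initiator modules are unloaded, the nvmf_tgt process (pid 3404977 in this run) is killed and reaped, the per-test network namespace is dropped, and the stale address on cvl_0_1 is flushed. A condensed sketch of that teardown, assuming the namespace is removed with 'ip netns delete' (the trace only shows the _remove_spdk_ns wrapper, not its body):

modprobe -v -r nvme-tcp                        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"             # nvmfpid=3404977 here; kill then wait, as in the trace
ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumption: what _remove_spdk_ns boils down to
ip -4 addr flush cvl_0_1

With that gone, the connect_stress run that starts above rebuilds the same TCP fixture from scratch before doing any work of its own.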
00:15:42.649 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:42.649 00:52:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:42.650 00:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ '' 
== mlx5 ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:47.963 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:47.963 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.963 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:47.964 Found net devices under 0000:27:00.0: cvl_0_0 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.964 
00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:47.964 Found net devices under 0000:27:00.1: cvl_0_1 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:47.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
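Stripped of the xtrace prefixes, nvmf_tcp_init reduces to plain iproute2 and iptables calls: the first detected port (cvl_0_0) is moved into a fresh namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened before a reachability ping. The same commands the trace shows, gathered in order for readability:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # reachability check (output follows)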
00:15:47.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:15:47.964 00:15:47.964 --- 10.0.0.2 ping statistics --- 00:15:47.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.964 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:15:47.964 00:15:47.964 --- 10.0.0.1 ping statistics --- 00:15:47.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.964 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3416609 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3416609 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3416609 ']' 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.964 00:52:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:47.964 [2024-05-15 00:52:34.900735] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
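nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers; from here on everything is driven through rpc.py against /var/tmp/spdk.sock. A minimal stand-in for that start-and-wait step, using rpc_get_methods as the readiness probe (an assumption - the real waitforlisten helper in autotest_common.sh is more thorough):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!                                   # 3416609 in this run (pid of the netns wrapper)
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1                                # poll until the app is up and listening
done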
00:15:47.964 [2024-05-15 00:52:34.900866] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.964 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.225 [2024-05-15 00:52:35.072678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.225 [2024-05-15 00:52:35.256519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.225 [2024-05-15 00:52:35.256597] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.225 [2024-05-15 00:52:35.256615] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.225 [2024-05-15 00:52:35.256632] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.225 [2024-05-15 00:52:35.256646] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.225 [2024-05-15 00:52:35.256848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.225 [2024-05-15 00:52:35.256979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.225 [2024-05-15 00:52:35.256995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.795 [2024-05-15 00:52:35.664434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.795 [2024-05-15 00:52:35.701290] nvmf_rpc.c: 610:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:48.795 [2024-05-15 00:52:35.701640] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.795 NULL1 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3416889 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.795 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.796 00:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.053 00:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.053 00:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:49.053 00:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.053 00:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.053 00:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.622 00:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.622 00:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:49.622 00:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.622 00:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.622 00:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.881 00:52:36 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.881 00:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:49.881 00:52:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.881 00:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.881 00:52:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.138 00:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.138 00:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:50.138 00:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.138 00:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.138 00:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.396 00:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.396 00:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:50.396 00:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.396 00:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.396 00:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.654 00:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.654 00:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:50.654 00:52:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.654 00:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.654 00:52:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.222 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.222 00:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:51.222 00:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.222 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.222 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.480 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.480 00:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:51.480 00:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.480 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.480 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.738 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.738 00:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:51.738 00:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.738 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.738 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.998 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:15:51.998 00:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:51.998 00:52:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.998 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.998 00:52:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.258 00:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.258 00:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:52.258 00:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.258 00:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.258 00:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.828 00:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.829 00:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:52.829 00:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.829 00:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.829 00:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.086 00:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.086 00:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:53.086 00:52:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.086 00:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.086 00:52:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.346 00:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.346 00:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:53.346 00:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.346 00:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.346 00:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.605 00:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.605 00:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:53.605 00:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.605 00:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.605 00:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.865 00:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.865 00:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:53.865 00:52:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.865 00:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.865 00:52:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.435 00:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.435 00:52:41 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:54.435 00:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.435 00:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.435 00:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 00:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.693 00:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:54.693 00:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.693 00:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.693 00:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.951 00:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.951 00:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:54.951 00:52:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.951 00:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.951 00:52:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.210 00:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.210 00:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:55.210 00:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.210 00:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.210 00:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.471 00:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.471 00:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:55.471 00:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.471 00:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.471 00:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.039 00:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.039 00:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:56.039 00:52:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.039 00:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.039 00:52:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.299 00:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.299 00:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:56.299 00:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.299 00:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.299 00:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.556 00:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.556 00:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3416889 00:15:56.556 00:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.556 00:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.556 00:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.815 00:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.815 00:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:56.815 00:52:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.815 00:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.815 00:52:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.076 00:52:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.076 00:52:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:57.076 00:52:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.076 00:52:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.076 00:52:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.645 00:52:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.645 00:52:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:57.645 00:52:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.645 00:52:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.645 00:52:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.903 00:52:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.903 00:52:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:57.903 00:52:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.903 00:52:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.903 00:52:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.161 00:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.161 00:52:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:58.161 00:52:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.161 00:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.161 00:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.422 00:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.422 00:52:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:58.422 00:52:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.422 00:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.422 00:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.682 00:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.682 00:52:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:58.682 00:52:45 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.682 00:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.682 00:52:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.943 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3416889 00:15:59.204 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3416889) - No such process 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3416889 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.204 rmmod nvme_tcp 00:15:59.204 rmmod nvme_fabrics 00:15:59.204 rmmod nvme_keyring 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3416609 ']' 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3416609 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3416609 ']' 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3416609 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3416609 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3416609' 00:15:59.204 killing process with pid 3416609 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3416609 00:15:59.204 [2024-05-15 00:52:46.172633] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:15:59.204 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3416609 00:15:59.771 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.771 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.771 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.771 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.771 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.771 00:52:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.771 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.771 00:52:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.679 00:52:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.679 00:16:01.679 real 0m19.236s 00:16:01.679 user 0m43.474s 00:16:01.679 sys 0m5.747s 00:16:01.679 00:52:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.679 00:52:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.679 ************************************ 00:16:01.679 END TEST nvmf_connect_stress 00:16:01.679 ************************************ 00:16:01.679 00:52:48 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:01.679 00:52:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:01.679 00:52:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:01.679 00:52:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.940 ************************************ 00:16:01.940 START TEST nvmf_fused_ordering 00:16:01.940 ************************************ 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:01.940 * Looking for test storage... 
00:16:01.940 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.940 00:52:48 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.941 00:52:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ '' 
== mlx5 ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:07.284 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:07.284 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:07.284 Found net devices under 0000:27:00.0: cvl_0_0 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.284 
00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:07.284 Found net devices under 0000:27:00.1: cvl_0_1 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.284 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:07.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:16:07.285 00:16:07.285 --- 10.0.0.2 ping statistics --- 00:16:07.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.285 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:16:07.285 00:16:07.285 --- 10.0.0.1 ping statistics --- 00:16:07.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.285 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3422634 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3422634 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3422634 ']' 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.285 00:52:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:07.285 [2024-05-15 00:52:54.055166] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
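The nvmf_tcp_init sequence traced above places the target-side port in its own network namespace so that the initiator port and the target port on the same host exercise a real NVMe/TCP path. Condensed from the trace (interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk and the 10.0.0.x addresses are specific to this rig; the canonical logic lives in test/nvmf/common.sh), the setup amounts to:

  # move the target port into a private namespace, keep the initiator port in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address the two ends: initiator 10.0.0.1/24, target 10.0.0.2/24 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the default NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then started under that same namespace via the NVMF_TARGET_NS_CMD prefix, which is why the nvmf_tgt launch above is wrapped in 'ip netns exec cvl_0_0_ns_spdk'.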
00:16:07.285 [2024-05-15 00:52:54.055274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.285 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.285 [2024-05-15 00:52:54.181917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.285 [2024-05-15 00:52:54.290874] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.285 [2024-05-15 00:52:54.290925] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.285 [2024-05-15 00:52:54.290937] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.285 [2024-05-15 00:52:54.290950] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.285 [2024-05-15 00:52:54.290960] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.285 [2024-05-15 00:52:54.291001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.851 [2024-05-15 00:52:54.764684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.851 [2024-05-15 00:52:54.780627] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:07.851 [2024-05-15 00:52:54.780891] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.851 NULL1 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.851 00:52:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:07.851 [2024-05-15 00:52:54.832981] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
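Before the fused_ordering tool starts, the target above is configured purely through RPCs. As a reference sketch, the rpc_cmd calls just traced correspond to the following sequence (assumption: rpc_cmd is treated here as equivalent to running SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket the target is listening on; every method name and argument is copied verbatim from the trace):

  # TCP transport, with the -o / -u 8192 options the test script passes (fused_ordering.sh line 15)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # subsystem cnode1: any host allowed (-a), fixed serial, up to 10 namespaces (-m 10)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # NVMe/TCP listener on the namespaced target address
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a 1000 MB null bdev with 512-byte blocks, exposed as namespace 1 (reported below as 'size: 1GB')
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary is then pointed at that listener with the connection string shown above ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'); the numbered fused_ordering(N) lines that follow are emitted by that tool as it runs.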
00:16:07.851 [2024-05-15 00:52:54.833026] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422901 ] 00:16:07.851 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.418 Attached to nqn.2016-06.io.spdk:cnode1 00:16:08.418 Namespace ID: 1 size: 1GB 00:16:08.418 fused_ordering(0) 00:16:08.418 fused_ordering(1) 00:16:08.418 fused_ordering(2) 00:16:08.418 fused_ordering(3) 00:16:08.418 fused_ordering(4) 00:16:08.418 fused_ordering(5) 00:16:08.418 fused_ordering(6) 00:16:08.418 fused_ordering(7) 00:16:08.418 fused_ordering(8) 00:16:08.418 fused_ordering(9) 00:16:08.418 fused_ordering(10) 00:16:08.418 fused_ordering(11) 00:16:08.418 fused_ordering(12) 00:16:08.418 fused_ordering(13) 00:16:08.418 fused_ordering(14) 00:16:08.418 fused_ordering(15) 00:16:08.418 fused_ordering(16) 00:16:08.418 fused_ordering(17) 00:16:08.418 fused_ordering(18) 00:16:08.418 fused_ordering(19) 00:16:08.418 fused_ordering(20) 00:16:08.418 fused_ordering(21) 00:16:08.418 fused_ordering(22) 00:16:08.418 fused_ordering(23) 00:16:08.418 fused_ordering(24) 00:16:08.418 fused_ordering(25) 00:16:08.418 fused_ordering(26) 00:16:08.418 fused_ordering(27) 00:16:08.418 fused_ordering(28) 00:16:08.418 fused_ordering(29) 00:16:08.418 fused_ordering(30) 00:16:08.418 fused_ordering(31) 00:16:08.418 fused_ordering(32) 00:16:08.418 fused_ordering(33) 00:16:08.418 fused_ordering(34) 00:16:08.418 fused_ordering(35) 00:16:08.418 fused_ordering(36) 00:16:08.418 fused_ordering(37) 00:16:08.418 fused_ordering(38) 00:16:08.418 fused_ordering(39) 00:16:08.418 fused_ordering(40) 00:16:08.418 fused_ordering(41) 00:16:08.418 fused_ordering(42) 00:16:08.418 fused_ordering(43) 00:16:08.418 fused_ordering(44) 00:16:08.418 fused_ordering(45) 00:16:08.418 fused_ordering(46) 00:16:08.418 fused_ordering(47) 00:16:08.418 fused_ordering(48) 00:16:08.418 fused_ordering(49) 00:16:08.418 fused_ordering(50) 00:16:08.418 fused_ordering(51) 00:16:08.418 fused_ordering(52) 00:16:08.418 fused_ordering(53) 00:16:08.418 fused_ordering(54) 00:16:08.418 fused_ordering(55) 00:16:08.418 fused_ordering(56) 00:16:08.418 fused_ordering(57) 00:16:08.418 fused_ordering(58) 00:16:08.418 fused_ordering(59) 00:16:08.418 fused_ordering(60) 00:16:08.418 fused_ordering(61) 00:16:08.418 fused_ordering(62) 00:16:08.418 fused_ordering(63) 00:16:08.418 fused_ordering(64) 00:16:08.418 fused_ordering(65) 00:16:08.418 fused_ordering(66) 00:16:08.418 fused_ordering(67) 00:16:08.418 fused_ordering(68) 00:16:08.418 fused_ordering(69) 00:16:08.418 fused_ordering(70) 00:16:08.418 fused_ordering(71) 00:16:08.418 fused_ordering(72) 00:16:08.418 fused_ordering(73) 00:16:08.418 fused_ordering(74) 00:16:08.418 fused_ordering(75) 00:16:08.418 fused_ordering(76) 00:16:08.418 fused_ordering(77) 00:16:08.418 fused_ordering(78) 00:16:08.418 fused_ordering(79) 00:16:08.418 fused_ordering(80) 00:16:08.418 fused_ordering(81) 00:16:08.418 fused_ordering(82) 00:16:08.418 fused_ordering(83) 00:16:08.418 fused_ordering(84) 00:16:08.418 fused_ordering(85) 00:16:08.418 fused_ordering(86) 00:16:08.418 fused_ordering(87) 00:16:08.418 fused_ordering(88) 00:16:08.418 fused_ordering(89) 00:16:08.418 fused_ordering(90) 00:16:08.418 fused_ordering(91) 00:16:08.418 fused_ordering(92) 00:16:08.418 fused_ordering(93) 00:16:08.418 fused_ordering(94) 00:16:08.418 fused_ordering(95) 00:16:08.418 fused_ordering(96) 00:16:08.418 
fused_ordering(97) 00:16:08.418 fused_ordering(98) 00:16:08.418 fused_ordering(99) 00:16:08.418 fused_ordering(100) 00:16:08.418 fused_ordering(101) 00:16:08.418 fused_ordering(102) 00:16:08.418 fused_ordering(103) 00:16:08.418 fused_ordering(104) 00:16:08.418 fused_ordering(105) 00:16:08.418 fused_ordering(106) 00:16:08.418 fused_ordering(107) 00:16:08.418 fused_ordering(108) 00:16:08.418 fused_ordering(109) 00:16:08.418 fused_ordering(110) 00:16:08.418 fused_ordering(111) 00:16:08.418 fused_ordering(112) 00:16:08.418 fused_ordering(113) 00:16:08.418 fused_ordering(114) 00:16:08.418 fused_ordering(115) 00:16:08.418 fused_ordering(116) 00:16:08.418 fused_ordering(117) 00:16:08.418 fused_ordering(118) 00:16:08.418 fused_ordering(119) 00:16:08.418 fused_ordering(120) 00:16:08.418 fused_ordering(121) 00:16:08.418 fused_ordering(122) 00:16:08.418 fused_ordering(123) 00:16:08.418 fused_ordering(124) 00:16:08.418 fused_ordering(125) 00:16:08.418 fused_ordering(126) 00:16:08.418 fused_ordering(127) 00:16:08.418 fused_ordering(128) 00:16:08.418 fused_ordering(129) 00:16:08.418 fused_ordering(130) 00:16:08.418 fused_ordering(131) 00:16:08.418 fused_ordering(132) 00:16:08.418 fused_ordering(133) 00:16:08.418 fused_ordering(134) 00:16:08.418 fused_ordering(135) 00:16:08.418 fused_ordering(136) 00:16:08.418 fused_ordering(137) 00:16:08.418 fused_ordering(138) 00:16:08.418 fused_ordering(139) 00:16:08.418 fused_ordering(140) 00:16:08.418 fused_ordering(141) 00:16:08.418 fused_ordering(142) 00:16:08.418 fused_ordering(143) 00:16:08.418 fused_ordering(144) 00:16:08.418 fused_ordering(145) 00:16:08.418 fused_ordering(146) 00:16:08.418 fused_ordering(147) 00:16:08.418 fused_ordering(148) 00:16:08.418 fused_ordering(149) 00:16:08.418 fused_ordering(150) 00:16:08.418 fused_ordering(151) 00:16:08.418 fused_ordering(152) 00:16:08.418 fused_ordering(153) 00:16:08.418 fused_ordering(154) 00:16:08.418 fused_ordering(155) 00:16:08.418 fused_ordering(156) 00:16:08.418 fused_ordering(157) 00:16:08.418 fused_ordering(158) 00:16:08.418 fused_ordering(159) 00:16:08.418 fused_ordering(160) 00:16:08.418 fused_ordering(161) 00:16:08.418 fused_ordering(162) 00:16:08.418 fused_ordering(163) 00:16:08.418 fused_ordering(164) 00:16:08.418 fused_ordering(165) 00:16:08.418 fused_ordering(166) 00:16:08.418 fused_ordering(167) 00:16:08.418 fused_ordering(168) 00:16:08.418 fused_ordering(169) 00:16:08.418 fused_ordering(170) 00:16:08.418 fused_ordering(171) 00:16:08.418 fused_ordering(172) 00:16:08.418 fused_ordering(173) 00:16:08.419 fused_ordering(174) 00:16:08.419 fused_ordering(175) 00:16:08.419 fused_ordering(176) 00:16:08.419 fused_ordering(177) 00:16:08.419 fused_ordering(178) 00:16:08.419 fused_ordering(179) 00:16:08.419 fused_ordering(180) 00:16:08.419 fused_ordering(181) 00:16:08.419 fused_ordering(182) 00:16:08.419 fused_ordering(183) 00:16:08.419 fused_ordering(184) 00:16:08.419 fused_ordering(185) 00:16:08.419 fused_ordering(186) 00:16:08.419 fused_ordering(187) 00:16:08.419 fused_ordering(188) 00:16:08.419 fused_ordering(189) 00:16:08.419 fused_ordering(190) 00:16:08.419 fused_ordering(191) 00:16:08.419 fused_ordering(192) 00:16:08.419 fused_ordering(193) 00:16:08.419 fused_ordering(194) 00:16:08.419 fused_ordering(195) 00:16:08.419 fused_ordering(196) 00:16:08.419 fused_ordering(197) 00:16:08.419 fused_ordering(198) 00:16:08.419 fused_ordering(199) 00:16:08.419 fused_ordering(200) 00:16:08.419 fused_ordering(201) 00:16:08.419 fused_ordering(202) 00:16:08.419 fused_ordering(203) 00:16:08.419 fused_ordering(204) 
00:16:08.419 fused_ordering(205) 00:16:08.419 fused_ordering(206) 00:16:08.419 fused_ordering(207) 00:16:08.419 fused_ordering(208) 00:16:08.419 fused_ordering(209) 00:16:08.419 fused_ordering(210) 00:16:08.419 fused_ordering(211) 00:16:08.419 fused_ordering(212) 00:16:08.419 fused_ordering(213) 00:16:08.419 fused_ordering(214) 00:16:08.419 fused_ordering(215) 00:16:08.419 fused_ordering(216) 00:16:08.419 fused_ordering(217) 00:16:08.419 fused_ordering(218) 00:16:08.419 fused_ordering(219) 00:16:08.419 fused_ordering(220) 00:16:08.419 fused_ordering(221) 00:16:08.419 fused_ordering(222) 00:16:08.419 fused_ordering(223) 00:16:08.419 fused_ordering(224) 00:16:08.419 fused_ordering(225) 00:16:08.419 fused_ordering(226) 00:16:08.419 fused_ordering(227) 00:16:08.419 fused_ordering(228) 00:16:08.419 fused_ordering(229) 00:16:08.419 fused_ordering(230) 00:16:08.419 fused_ordering(231) 00:16:08.419 fused_ordering(232) 00:16:08.419 fused_ordering(233) 00:16:08.419 fused_ordering(234) 00:16:08.419 fused_ordering(235) 00:16:08.419 fused_ordering(236) 00:16:08.419 fused_ordering(237) 00:16:08.419 fused_ordering(238) 00:16:08.419 fused_ordering(239) 00:16:08.419 fused_ordering(240) 00:16:08.419 fused_ordering(241) 00:16:08.419 fused_ordering(242) 00:16:08.419 fused_ordering(243) 00:16:08.419 fused_ordering(244) 00:16:08.419 fused_ordering(245) 00:16:08.419 fused_ordering(246) 00:16:08.419 fused_ordering(247) 00:16:08.419 fused_ordering(248) 00:16:08.419 fused_ordering(249) 00:16:08.419 fused_ordering(250) 00:16:08.419 fused_ordering(251) 00:16:08.419 fused_ordering(252) 00:16:08.419 fused_ordering(253) 00:16:08.419 fused_ordering(254) 00:16:08.419 fused_ordering(255) 00:16:08.419 fused_ordering(256) 00:16:08.419 fused_ordering(257) 00:16:08.419 fused_ordering(258) 00:16:08.419 fused_ordering(259) 00:16:08.419 fused_ordering(260) 00:16:08.419 fused_ordering(261) 00:16:08.419 fused_ordering(262) 00:16:08.419 fused_ordering(263) 00:16:08.419 fused_ordering(264) 00:16:08.419 fused_ordering(265) 00:16:08.419 fused_ordering(266) 00:16:08.419 fused_ordering(267) 00:16:08.419 fused_ordering(268) 00:16:08.419 fused_ordering(269) 00:16:08.419 fused_ordering(270) 00:16:08.419 fused_ordering(271) 00:16:08.419 fused_ordering(272) 00:16:08.419 fused_ordering(273) 00:16:08.419 fused_ordering(274) 00:16:08.419 fused_ordering(275) 00:16:08.419 fused_ordering(276) 00:16:08.419 fused_ordering(277) 00:16:08.419 fused_ordering(278) 00:16:08.419 fused_ordering(279) 00:16:08.419 fused_ordering(280) 00:16:08.419 fused_ordering(281) 00:16:08.419 fused_ordering(282) 00:16:08.419 fused_ordering(283) 00:16:08.419 fused_ordering(284) 00:16:08.419 fused_ordering(285) 00:16:08.419 fused_ordering(286) 00:16:08.419 fused_ordering(287) 00:16:08.419 fused_ordering(288) 00:16:08.419 fused_ordering(289) 00:16:08.419 fused_ordering(290) 00:16:08.419 fused_ordering(291) 00:16:08.419 fused_ordering(292) 00:16:08.419 fused_ordering(293) 00:16:08.419 fused_ordering(294) 00:16:08.419 fused_ordering(295) 00:16:08.419 fused_ordering(296) 00:16:08.419 fused_ordering(297) 00:16:08.419 fused_ordering(298) 00:16:08.419 fused_ordering(299) 00:16:08.419 fused_ordering(300) 00:16:08.419 fused_ordering(301) 00:16:08.419 fused_ordering(302) 00:16:08.419 fused_ordering(303) 00:16:08.419 fused_ordering(304) 00:16:08.419 fused_ordering(305) 00:16:08.419 fused_ordering(306) 00:16:08.419 fused_ordering(307) 00:16:08.419 fused_ordering(308) 00:16:08.419 fused_ordering(309) 00:16:08.419 fused_ordering(310) 00:16:08.419 fused_ordering(311) 00:16:08.419 
fused_ordering(312) 00:16:08.419 fused_ordering(313) 00:16:08.419 fused_ordering(314) 00:16:08.419 fused_ordering(315) 00:16:08.419 fused_ordering(316) 00:16:08.419 fused_ordering(317) 00:16:08.419 fused_ordering(318) 00:16:08.419 fused_ordering(319) 00:16:08.419 fused_ordering(320) 00:16:08.419 fused_ordering(321) 00:16:08.419 fused_ordering(322) 00:16:08.419 fused_ordering(323) 00:16:08.419 fused_ordering(324) 00:16:08.419 fused_ordering(325) 00:16:08.419 fused_ordering(326) 00:16:08.419 fused_ordering(327) 00:16:08.419 fused_ordering(328) 00:16:08.419 fused_ordering(329) 00:16:08.419 fused_ordering(330) 00:16:08.419 fused_ordering(331) 00:16:08.419 fused_ordering(332) 00:16:08.419 fused_ordering(333) 00:16:08.419 fused_ordering(334) 00:16:08.419 fused_ordering(335) 00:16:08.419 fused_ordering(336) 00:16:08.419 fused_ordering(337) 00:16:08.419 fused_ordering(338) 00:16:08.419 fused_ordering(339) 00:16:08.419 fused_ordering(340) 00:16:08.419 fused_ordering(341) 00:16:08.419 fused_ordering(342) 00:16:08.419 fused_ordering(343) 00:16:08.419 fused_ordering(344) 00:16:08.419 fused_ordering(345) 00:16:08.419 fused_ordering(346) 00:16:08.419 fused_ordering(347) 00:16:08.419 fused_ordering(348) 00:16:08.419 fused_ordering(349) 00:16:08.419 fused_ordering(350) 00:16:08.419 fused_ordering(351) 00:16:08.419 fused_ordering(352) 00:16:08.419 fused_ordering(353) 00:16:08.419 fused_ordering(354) 00:16:08.419 fused_ordering(355) 00:16:08.419 fused_ordering(356) 00:16:08.419 fused_ordering(357) 00:16:08.419 fused_ordering(358) 00:16:08.419 fused_ordering(359) 00:16:08.419 fused_ordering(360) 00:16:08.419 fused_ordering(361) 00:16:08.419 fused_ordering(362) 00:16:08.419 fused_ordering(363) 00:16:08.419 fused_ordering(364) 00:16:08.419 fused_ordering(365) 00:16:08.419 fused_ordering(366) 00:16:08.419 fused_ordering(367) 00:16:08.419 fused_ordering(368) 00:16:08.419 fused_ordering(369) 00:16:08.419 fused_ordering(370) 00:16:08.419 fused_ordering(371) 00:16:08.419 fused_ordering(372) 00:16:08.419 fused_ordering(373) 00:16:08.419 fused_ordering(374) 00:16:08.419 fused_ordering(375) 00:16:08.419 fused_ordering(376) 00:16:08.419 fused_ordering(377) 00:16:08.419 fused_ordering(378) 00:16:08.419 fused_ordering(379) 00:16:08.419 fused_ordering(380) 00:16:08.419 fused_ordering(381) 00:16:08.419 fused_ordering(382) 00:16:08.419 fused_ordering(383) 00:16:08.419 fused_ordering(384) 00:16:08.419 fused_ordering(385) 00:16:08.419 fused_ordering(386) 00:16:08.419 fused_ordering(387) 00:16:08.419 fused_ordering(388) 00:16:08.419 fused_ordering(389) 00:16:08.419 fused_ordering(390) 00:16:08.419 fused_ordering(391) 00:16:08.419 fused_ordering(392) 00:16:08.419 fused_ordering(393) 00:16:08.419 fused_ordering(394) 00:16:08.419 fused_ordering(395) 00:16:08.419 fused_ordering(396) 00:16:08.419 fused_ordering(397) 00:16:08.419 fused_ordering(398) 00:16:08.419 fused_ordering(399) 00:16:08.419 fused_ordering(400) 00:16:08.419 fused_ordering(401) 00:16:08.419 fused_ordering(402) 00:16:08.419 fused_ordering(403) 00:16:08.419 fused_ordering(404) 00:16:08.419 fused_ordering(405) 00:16:08.419 fused_ordering(406) 00:16:08.419 fused_ordering(407) 00:16:08.419 fused_ordering(408) 00:16:08.419 fused_ordering(409) 00:16:08.419 fused_ordering(410) 00:16:08.678 fused_ordering(411) 00:16:08.678 fused_ordering(412) 00:16:08.678 fused_ordering(413) 00:16:08.678 fused_ordering(414) 00:16:08.678 fused_ordering(415) 00:16:08.678 fused_ordering(416) 00:16:08.678 fused_ordering(417) 00:16:08.678 fused_ordering(418) 00:16:08.678 fused_ordering(419) 
00:16:08.678 fused_ordering(420) 00:16:08.678 fused_ordering(421) 00:16:08.678 fused_ordering(422) 00:16:08.678 fused_ordering(423) 00:16:08.678 fused_ordering(424) 00:16:08.678 fused_ordering(425) 00:16:08.678 fused_ordering(426) 00:16:08.678 fused_ordering(427) 00:16:08.678 fused_ordering(428) 00:16:08.678 fused_ordering(429) 00:16:08.678 fused_ordering(430) 00:16:08.678 fused_ordering(431) 00:16:08.678 fused_ordering(432) 00:16:08.678 fused_ordering(433) 00:16:08.678 fused_ordering(434) 00:16:08.678 fused_ordering(435) 00:16:08.678 fused_ordering(436) 00:16:08.678 fused_ordering(437) 00:16:08.678 fused_ordering(438) 00:16:08.678 fused_ordering(439) 00:16:08.678 fused_ordering(440) 00:16:08.678 fused_ordering(441) 00:16:08.678 fused_ordering(442) 00:16:08.678 fused_ordering(443) 00:16:08.678 fused_ordering(444) 00:16:08.678 fused_ordering(445) 00:16:08.678 fused_ordering(446) 00:16:08.678 fused_ordering(447) 00:16:08.678 fused_ordering(448) 00:16:08.678 fused_ordering(449) 00:16:08.678 fused_ordering(450) 00:16:08.678 fused_ordering(451) 00:16:08.678 fused_ordering(452) 00:16:08.678 fused_ordering(453) 00:16:08.678 fused_ordering(454) 00:16:08.678 fused_ordering(455) 00:16:08.678 fused_ordering(456) 00:16:08.678 fused_ordering(457) 00:16:08.678 fused_ordering(458) 00:16:08.678 fused_ordering(459) 00:16:08.678 fused_ordering(460) 00:16:08.678 fused_ordering(461) 00:16:08.678 fused_ordering(462) 00:16:08.678 fused_ordering(463) 00:16:08.678 fused_ordering(464) 00:16:08.678 fused_ordering(465) 00:16:08.678 fused_ordering(466) 00:16:08.678 fused_ordering(467) 00:16:08.678 fused_ordering(468) 00:16:08.678 fused_ordering(469) 00:16:08.678 fused_ordering(470) 00:16:08.678 fused_ordering(471) 00:16:08.678 fused_ordering(472) 00:16:08.678 fused_ordering(473) 00:16:08.678 fused_ordering(474) 00:16:08.678 fused_ordering(475) 00:16:08.678 fused_ordering(476) 00:16:08.678 fused_ordering(477) 00:16:08.678 fused_ordering(478) 00:16:08.678 fused_ordering(479) 00:16:08.678 fused_ordering(480) 00:16:08.678 fused_ordering(481) 00:16:08.678 fused_ordering(482) 00:16:08.678 fused_ordering(483) 00:16:08.678 fused_ordering(484) 00:16:08.678 fused_ordering(485) 00:16:08.678 fused_ordering(486) 00:16:08.678 fused_ordering(487) 00:16:08.678 fused_ordering(488) 00:16:08.678 fused_ordering(489) 00:16:08.678 fused_ordering(490) 00:16:08.678 fused_ordering(491) 00:16:08.678 fused_ordering(492) 00:16:08.678 fused_ordering(493) 00:16:08.678 fused_ordering(494) 00:16:08.678 fused_ordering(495) 00:16:08.678 fused_ordering(496) 00:16:08.678 fused_ordering(497) 00:16:08.678 fused_ordering(498) 00:16:08.678 fused_ordering(499) 00:16:08.678 fused_ordering(500) 00:16:08.678 fused_ordering(501) 00:16:08.678 fused_ordering(502) 00:16:08.678 fused_ordering(503) 00:16:08.678 fused_ordering(504) 00:16:08.678 fused_ordering(505) 00:16:08.678 fused_ordering(506) 00:16:08.678 fused_ordering(507) 00:16:08.678 fused_ordering(508) 00:16:08.678 fused_ordering(509) 00:16:08.678 fused_ordering(510) 00:16:08.678 fused_ordering(511) 00:16:08.678 fused_ordering(512) 00:16:08.678 fused_ordering(513) 00:16:08.678 fused_ordering(514) 00:16:08.678 fused_ordering(515) 00:16:08.678 fused_ordering(516) 00:16:08.678 fused_ordering(517) 00:16:08.678 fused_ordering(518) 00:16:08.678 fused_ordering(519) 00:16:08.678 fused_ordering(520) 00:16:08.678 fused_ordering(521) 00:16:08.678 fused_ordering(522) 00:16:08.678 fused_ordering(523) 00:16:08.678 fused_ordering(524) 00:16:08.678 fused_ordering(525) 00:16:08.678 fused_ordering(526) 00:16:08.678 
fused_ordering(527) 00:16:08.678 fused_ordering(528) 00:16:08.678 fused_ordering(529) 00:16:08.678 fused_ordering(530) 00:16:08.678 fused_ordering(531) 00:16:08.678 fused_ordering(532) 00:16:08.678 fused_ordering(533) 00:16:08.678 fused_ordering(534) 00:16:08.678 fused_ordering(535) 00:16:08.678 fused_ordering(536) 00:16:08.678 fused_ordering(537) 00:16:08.678 fused_ordering(538) 00:16:08.678 fused_ordering(539) 00:16:08.678 fused_ordering(540) 00:16:08.678 fused_ordering(541) 00:16:08.678 fused_ordering(542) 00:16:08.678 fused_ordering(543) 00:16:08.678 fused_ordering(544) 00:16:08.678 fused_ordering(545) 00:16:08.678 fused_ordering(546) 00:16:08.678 fused_ordering(547) 00:16:08.678 fused_ordering(548) 00:16:08.678 fused_ordering(549) 00:16:08.678 fused_ordering(550) 00:16:08.678 fused_ordering(551) 00:16:08.678 fused_ordering(552) 00:16:08.678 fused_ordering(553) 00:16:08.678 fused_ordering(554) 00:16:08.678 fused_ordering(555) 00:16:08.678 fused_ordering(556) 00:16:08.678 fused_ordering(557) 00:16:08.678 fused_ordering(558) 00:16:08.678 fused_ordering(559) 00:16:08.678 fused_ordering(560) 00:16:08.678 fused_ordering(561) 00:16:08.678 fused_ordering(562) 00:16:08.678 fused_ordering(563) 00:16:08.678 fused_ordering(564) 00:16:08.678 fused_ordering(565) 00:16:08.678 fused_ordering(566) 00:16:08.678 fused_ordering(567) 00:16:08.678 fused_ordering(568) 00:16:08.678 fused_ordering(569) 00:16:08.678 fused_ordering(570) 00:16:08.678 fused_ordering(571) 00:16:08.678 fused_ordering(572) 00:16:08.678 fused_ordering(573) 00:16:08.678 fused_ordering(574) 00:16:08.678 fused_ordering(575) 00:16:08.678 fused_ordering(576) 00:16:08.678 fused_ordering(577) 00:16:08.678 fused_ordering(578) 00:16:08.678 fused_ordering(579) 00:16:08.678 fused_ordering(580) 00:16:08.678 fused_ordering(581) 00:16:08.678 fused_ordering(582) 00:16:08.678 fused_ordering(583) 00:16:08.678 fused_ordering(584) 00:16:08.678 fused_ordering(585) 00:16:08.678 fused_ordering(586) 00:16:08.678 fused_ordering(587) 00:16:08.678 fused_ordering(588) 00:16:08.678 fused_ordering(589) 00:16:08.678 fused_ordering(590) 00:16:08.678 fused_ordering(591) 00:16:08.678 fused_ordering(592) 00:16:08.678 fused_ordering(593) 00:16:08.678 fused_ordering(594) 00:16:08.678 fused_ordering(595) 00:16:08.678 fused_ordering(596) 00:16:08.678 fused_ordering(597) 00:16:08.678 fused_ordering(598) 00:16:08.678 fused_ordering(599) 00:16:08.678 fused_ordering(600) 00:16:08.678 fused_ordering(601) 00:16:08.678 fused_ordering(602) 00:16:08.678 fused_ordering(603) 00:16:08.678 fused_ordering(604) 00:16:08.678 fused_ordering(605) 00:16:08.678 fused_ordering(606) 00:16:08.678 fused_ordering(607) 00:16:08.678 fused_ordering(608) 00:16:08.678 fused_ordering(609) 00:16:08.678 fused_ordering(610) 00:16:08.678 fused_ordering(611) 00:16:08.678 fused_ordering(612) 00:16:08.678 fused_ordering(613) 00:16:08.678 fused_ordering(614) 00:16:08.678 fused_ordering(615) 00:16:09.244 fused_ordering(616) 00:16:09.244 fused_ordering(617) 00:16:09.244 fused_ordering(618) 00:16:09.244 fused_ordering(619) 00:16:09.244 fused_ordering(620) 00:16:09.244 fused_ordering(621) 00:16:09.244 fused_ordering(622) 00:16:09.244 fused_ordering(623) 00:16:09.244 fused_ordering(624) 00:16:09.244 fused_ordering(625) 00:16:09.244 fused_ordering(626) 00:16:09.244 fused_ordering(627) 00:16:09.244 fused_ordering(628) 00:16:09.244 fused_ordering(629) 00:16:09.244 fused_ordering(630) 00:16:09.244 fused_ordering(631) 00:16:09.244 fused_ordering(632) 00:16:09.244 fused_ordering(633) 00:16:09.244 fused_ordering(634) 
00:16:09.244 fused_ordering(635) 00:16:09.244 fused_ordering(636) 00:16:09.244 fused_ordering(637) 00:16:09.244 fused_ordering(638) 00:16:09.244 fused_ordering(639) 00:16:09.244 fused_ordering(640) 00:16:09.244 fused_ordering(641) 00:16:09.244 fused_ordering(642) 00:16:09.244 fused_ordering(643) 00:16:09.244 fused_ordering(644) 00:16:09.244 fused_ordering(645) 00:16:09.244 fused_ordering(646) 00:16:09.244 fused_ordering(647) 00:16:09.244 fused_ordering(648) 00:16:09.244 fused_ordering(649) 00:16:09.244 fused_ordering(650) 00:16:09.244 fused_ordering(651) 00:16:09.244 fused_ordering(652) 00:16:09.244 fused_ordering(653) 00:16:09.244 fused_ordering(654) 00:16:09.244 fused_ordering(655) 00:16:09.244 fused_ordering(656) 00:16:09.244 fused_ordering(657) 00:16:09.244 fused_ordering(658) 00:16:09.244 fused_ordering(659) 00:16:09.244 fused_ordering(660) 00:16:09.244 fused_ordering(661) 00:16:09.244 fused_ordering(662) 00:16:09.244 fused_ordering(663) 00:16:09.244 fused_ordering(664) 00:16:09.244 fused_ordering(665) 00:16:09.244 fused_ordering(666) 00:16:09.244 fused_ordering(667) 00:16:09.244 fused_ordering(668) 00:16:09.244 fused_ordering(669) 00:16:09.244 fused_ordering(670) 00:16:09.244 fused_ordering(671) 00:16:09.244 fused_ordering(672) 00:16:09.244 fused_ordering(673) 00:16:09.244 fused_ordering(674) 00:16:09.244 fused_ordering(675) 00:16:09.244 fused_ordering(676) 00:16:09.244 fused_ordering(677) 00:16:09.244 fused_ordering(678) 00:16:09.244 fused_ordering(679) 00:16:09.244 fused_ordering(680) 00:16:09.244 fused_ordering(681) 00:16:09.244 fused_ordering(682) 00:16:09.244 fused_ordering(683) 00:16:09.244 fused_ordering(684) 00:16:09.244 fused_ordering(685) 00:16:09.244 fused_ordering(686) 00:16:09.244 fused_ordering(687) 00:16:09.244 fused_ordering(688) 00:16:09.244 fused_ordering(689) 00:16:09.244 fused_ordering(690) 00:16:09.244 fused_ordering(691) 00:16:09.244 fused_ordering(692) 00:16:09.244 fused_ordering(693) 00:16:09.244 fused_ordering(694) 00:16:09.244 fused_ordering(695) 00:16:09.244 fused_ordering(696) 00:16:09.244 fused_ordering(697) 00:16:09.244 fused_ordering(698) 00:16:09.244 fused_ordering(699) 00:16:09.244 fused_ordering(700) 00:16:09.244 fused_ordering(701) 00:16:09.244 fused_ordering(702) 00:16:09.244 fused_ordering(703) 00:16:09.244 fused_ordering(704) 00:16:09.244 fused_ordering(705) 00:16:09.244 fused_ordering(706) 00:16:09.244 fused_ordering(707) 00:16:09.244 fused_ordering(708) 00:16:09.244 fused_ordering(709) 00:16:09.244 fused_ordering(710) 00:16:09.244 fused_ordering(711) 00:16:09.244 fused_ordering(712) 00:16:09.244 fused_ordering(713) 00:16:09.244 fused_ordering(714) 00:16:09.244 fused_ordering(715) 00:16:09.244 fused_ordering(716) 00:16:09.244 fused_ordering(717) 00:16:09.244 fused_ordering(718) 00:16:09.244 fused_ordering(719) 00:16:09.244 fused_ordering(720) 00:16:09.244 fused_ordering(721) 00:16:09.245 fused_ordering(722) 00:16:09.245 fused_ordering(723) 00:16:09.245 fused_ordering(724) 00:16:09.245 fused_ordering(725) 00:16:09.245 fused_ordering(726) 00:16:09.245 fused_ordering(727) 00:16:09.245 fused_ordering(728) 00:16:09.245 fused_ordering(729) 00:16:09.245 fused_ordering(730) 00:16:09.245 fused_ordering(731) 00:16:09.245 fused_ordering(732) 00:16:09.245 fused_ordering(733) 00:16:09.245 fused_ordering(734) 00:16:09.245 fused_ordering(735) 00:16:09.245 fused_ordering(736) 00:16:09.245 fused_ordering(737) 00:16:09.245 fused_ordering(738) 00:16:09.245 fused_ordering(739) 00:16:09.245 fused_ordering(740) 00:16:09.245 fused_ordering(741) 00:16:09.245 
fused_ordering(742) 00:16:09.245 fused_ordering(743) 00:16:09.245 fused_ordering(744) 00:16:09.245 fused_ordering(745) 00:16:09.245 fused_ordering(746) 00:16:09.245 fused_ordering(747) 00:16:09.245 fused_ordering(748) 00:16:09.245 fused_ordering(749) 00:16:09.245 fused_ordering(750) 00:16:09.245 fused_ordering(751) 00:16:09.245 fused_ordering(752) 00:16:09.245 fused_ordering(753) 00:16:09.245 fused_ordering(754) 00:16:09.245 fused_ordering(755) 00:16:09.245 fused_ordering(756) 00:16:09.245 fused_ordering(757) 00:16:09.245 fused_ordering(758) 00:16:09.245 fused_ordering(759) 00:16:09.245 fused_ordering(760) 00:16:09.245 fused_ordering(761) 00:16:09.245 fused_ordering(762) 00:16:09.245 fused_ordering(763) 00:16:09.245 fused_ordering(764) 00:16:09.245 fused_ordering(765) 00:16:09.245 fused_ordering(766) 00:16:09.245 fused_ordering(767) 00:16:09.245 fused_ordering(768) 00:16:09.245 fused_ordering(769) 00:16:09.245 fused_ordering(770) 00:16:09.245 fused_ordering(771) 00:16:09.245 fused_ordering(772) 00:16:09.245 fused_ordering(773) 00:16:09.245 fused_ordering(774) 00:16:09.245 fused_ordering(775) 00:16:09.245 fused_ordering(776) 00:16:09.245 fused_ordering(777) 00:16:09.245 fused_ordering(778) 00:16:09.245 fused_ordering(779) 00:16:09.245 fused_ordering(780) 00:16:09.245 fused_ordering(781) 00:16:09.245 fused_ordering(782) 00:16:09.245 fused_ordering(783) 00:16:09.245 fused_ordering(784) 00:16:09.245 fused_ordering(785) 00:16:09.245 fused_ordering(786) 00:16:09.245 fused_ordering(787) 00:16:09.245 fused_ordering(788) 00:16:09.245 fused_ordering(789) 00:16:09.245 fused_ordering(790) 00:16:09.245 fused_ordering(791) 00:16:09.245 fused_ordering(792) 00:16:09.245 fused_ordering(793) 00:16:09.245 fused_ordering(794) 00:16:09.245 fused_ordering(795) 00:16:09.245 fused_ordering(796) 00:16:09.245 fused_ordering(797) 00:16:09.245 fused_ordering(798) 00:16:09.245 fused_ordering(799) 00:16:09.245 fused_ordering(800) 00:16:09.245 fused_ordering(801) 00:16:09.245 fused_ordering(802) 00:16:09.245 fused_ordering(803) 00:16:09.245 fused_ordering(804) 00:16:09.245 fused_ordering(805) 00:16:09.245 fused_ordering(806) 00:16:09.245 fused_ordering(807) 00:16:09.245 fused_ordering(808) 00:16:09.245 fused_ordering(809) 00:16:09.245 fused_ordering(810) 00:16:09.245 fused_ordering(811) 00:16:09.245 fused_ordering(812) 00:16:09.245 fused_ordering(813) 00:16:09.245 fused_ordering(814) 00:16:09.245 fused_ordering(815) 00:16:09.245 fused_ordering(816) 00:16:09.245 fused_ordering(817) 00:16:09.245 fused_ordering(818) 00:16:09.245 fused_ordering(819) 00:16:09.245 fused_ordering(820) 00:16:09.504 fused_ordering(821) 00:16:09.504 fused_ordering(822) 00:16:09.504 fused_ordering(823) 00:16:09.504 fused_ordering(824) 00:16:09.504 fused_ordering(825) 00:16:09.504 fused_ordering(826) 00:16:09.504 fused_ordering(827) 00:16:09.504 fused_ordering(828) 00:16:09.504 fused_ordering(829) 00:16:09.504 fused_ordering(830) 00:16:09.504 fused_ordering(831) 00:16:09.504 fused_ordering(832) 00:16:09.504 fused_ordering(833) 00:16:09.504 fused_ordering(834) 00:16:09.504 fused_ordering(835) 00:16:09.504 fused_ordering(836) 00:16:09.504 fused_ordering(837) 00:16:09.504 fused_ordering(838) 00:16:09.504 fused_ordering(839) 00:16:09.504 fused_ordering(840) 00:16:09.504 fused_ordering(841) 00:16:09.504 fused_ordering(842) 00:16:09.504 fused_ordering(843) 00:16:09.504 fused_ordering(844) 00:16:09.504 fused_ordering(845) 00:16:09.504 fused_ordering(846) 00:16:09.504 fused_ordering(847) 00:16:09.504 fused_ordering(848) 00:16:09.504 fused_ordering(849) 
00:16:09.504 fused_ordering(850) 00:16:09.504 fused_ordering(851) 00:16:09.504 fused_ordering(852) 00:16:09.504 fused_ordering(853) 00:16:09.504 fused_ordering(854) 00:16:09.504 fused_ordering(855) 00:16:09.504 fused_ordering(856) 00:16:09.504 fused_ordering(857) 00:16:09.504 fused_ordering(858) 00:16:09.504 fused_ordering(859) 00:16:09.504 fused_ordering(860) 00:16:09.504 fused_ordering(861) 00:16:09.504 fused_ordering(862) 00:16:09.504 fused_ordering(863) 00:16:09.504 fused_ordering(864) 00:16:09.504 fused_ordering(865) 00:16:09.504 fused_ordering(866) 00:16:09.504 fused_ordering(867) 00:16:09.504 fused_ordering(868) 00:16:09.504 fused_ordering(869) 00:16:09.504 fused_ordering(870) 00:16:09.504 fused_ordering(871) 00:16:09.504 fused_ordering(872) 00:16:09.504 fused_ordering(873) 00:16:09.504 fused_ordering(874) 00:16:09.504 fused_ordering(875) 00:16:09.504 fused_ordering(876) 00:16:09.504 fused_ordering(877) 00:16:09.504 fused_ordering(878) 00:16:09.504 fused_ordering(879) 00:16:09.504 fused_ordering(880) 00:16:09.504 fused_ordering(881) 00:16:09.504 fused_ordering(882) 00:16:09.504 fused_ordering(883) 00:16:09.504 fused_ordering(884) 00:16:09.504 fused_ordering(885) 00:16:09.504 fused_ordering(886) 00:16:09.504 fused_ordering(887) 00:16:09.504 fused_ordering(888) 00:16:09.504 fused_ordering(889) 00:16:09.504 fused_ordering(890) 00:16:09.504 fused_ordering(891) 00:16:09.504 fused_ordering(892) 00:16:09.504 fused_ordering(893) 00:16:09.504 fused_ordering(894) 00:16:09.504 fused_ordering(895) 00:16:09.504 fused_ordering(896) 00:16:09.504 fused_ordering(897) 00:16:09.504 fused_ordering(898) 00:16:09.504 fused_ordering(899) 00:16:09.504 fused_ordering(900) 00:16:09.504 fused_ordering(901) 00:16:09.504 fused_ordering(902) 00:16:09.504 fused_ordering(903) 00:16:09.504 fused_ordering(904) 00:16:09.504 fused_ordering(905) 00:16:09.504 fused_ordering(906) 00:16:09.504 fused_ordering(907) 00:16:09.504 fused_ordering(908) 00:16:09.504 fused_ordering(909) 00:16:09.504 fused_ordering(910) 00:16:09.504 fused_ordering(911) 00:16:09.504 fused_ordering(912) 00:16:09.504 fused_ordering(913) 00:16:09.504 fused_ordering(914) 00:16:09.504 fused_ordering(915) 00:16:09.504 fused_ordering(916) 00:16:09.505 fused_ordering(917) 00:16:09.505 fused_ordering(918) 00:16:09.505 fused_ordering(919) 00:16:09.505 fused_ordering(920) 00:16:09.505 fused_ordering(921) 00:16:09.505 fused_ordering(922) 00:16:09.505 fused_ordering(923) 00:16:09.505 fused_ordering(924) 00:16:09.505 fused_ordering(925) 00:16:09.505 fused_ordering(926) 00:16:09.505 fused_ordering(927) 00:16:09.505 fused_ordering(928) 00:16:09.505 fused_ordering(929) 00:16:09.505 fused_ordering(930) 00:16:09.505 fused_ordering(931) 00:16:09.505 fused_ordering(932) 00:16:09.505 fused_ordering(933) 00:16:09.505 fused_ordering(934) 00:16:09.505 fused_ordering(935) 00:16:09.505 fused_ordering(936) 00:16:09.505 fused_ordering(937) 00:16:09.505 fused_ordering(938) 00:16:09.505 fused_ordering(939) 00:16:09.505 fused_ordering(940) 00:16:09.505 fused_ordering(941) 00:16:09.505 fused_ordering(942) 00:16:09.505 fused_ordering(943) 00:16:09.505 fused_ordering(944) 00:16:09.505 fused_ordering(945) 00:16:09.505 fused_ordering(946) 00:16:09.505 fused_ordering(947) 00:16:09.505 fused_ordering(948) 00:16:09.505 fused_ordering(949) 00:16:09.505 fused_ordering(950) 00:16:09.505 fused_ordering(951) 00:16:09.505 fused_ordering(952) 00:16:09.505 fused_ordering(953) 00:16:09.505 fused_ordering(954) 00:16:09.505 fused_ordering(955) 00:16:09.505 fused_ordering(956) 00:16:09.505 
fused_ordering(957) 00:16:09.505 fused_ordering(958) 00:16:09.505 fused_ordering(959) 00:16:09.505 fused_ordering(960) 00:16:09.505 fused_ordering(961) 00:16:09.505 fused_ordering(962) 00:16:09.505 fused_ordering(963) 00:16:09.505 fused_ordering(964) 00:16:09.505 fused_ordering(965) 00:16:09.505 fused_ordering(966) 00:16:09.505 fused_ordering(967) 00:16:09.505 fused_ordering(968) 00:16:09.505 fused_ordering(969) 00:16:09.505 fused_ordering(970) 00:16:09.505 fused_ordering(971) 00:16:09.505 fused_ordering(972) 00:16:09.505 fused_ordering(973) 00:16:09.505 fused_ordering(974) 00:16:09.505 fused_ordering(975) 00:16:09.505 fused_ordering(976) 00:16:09.505 fused_ordering(977) 00:16:09.505 fused_ordering(978) 00:16:09.505 fused_ordering(979) 00:16:09.505 fused_ordering(980) 00:16:09.505 fused_ordering(981) 00:16:09.505 fused_ordering(982) 00:16:09.505 fused_ordering(983) 00:16:09.505 fused_ordering(984) 00:16:09.505 fused_ordering(985) 00:16:09.505 fused_ordering(986) 00:16:09.505 fused_ordering(987) 00:16:09.505 fused_ordering(988) 00:16:09.505 fused_ordering(989) 00:16:09.505 fused_ordering(990) 00:16:09.505 fused_ordering(991) 00:16:09.505 fused_ordering(992) 00:16:09.505 fused_ordering(993) 00:16:09.505 fused_ordering(994) 00:16:09.505 fused_ordering(995) 00:16:09.505 fused_ordering(996) 00:16:09.505 fused_ordering(997) 00:16:09.505 fused_ordering(998) 00:16:09.505 fused_ordering(999) 00:16:09.505 fused_ordering(1000) 00:16:09.505 fused_ordering(1001) 00:16:09.505 fused_ordering(1002) 00:16:09.505 fused_ordering(1003) 00:16:09.505 fused_ordering(1004) 00:16:09.505 fused_ordering(1005) 00:16:09.505 fused_ordering(1006) 00:16:09.505 fused_ordering(1007) 00:16:09.505 fused_ordering(1008) 00:16:09.505 fused_ordering(1009) 00:16:09.505 fused_ordering(1010) 00:16:09.505 fused_ordering(1011) 00:16:09.505 fused_ordering(1012) 00:16:09.505 fused_ordering(1013) 00:16:09.505 fused_ordering(1014) 00:16:09.505 fused_ordering(1015) 00:16:09.505 fused_ordering(1016) 00:16:09.505 fused_ordering(1017) 00:16:09.505 fused_ordering(1018) 00:16:09.505 fused_ordering(1019) 00:16:09.505 fused_ordering(1020) 00:16:09.505 fused_ordering(1021) 00:16:09.505 fused_ordering(1022) 00:16:09.505 fused_ordering(1023) 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.505 rmmod nvme_tcp 00:16:09.505 rmmod nvme_fabrics 00:16:09.505 rmmod nvme_keyring 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3422634 ']' 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3422634 
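At this point the fused-ordering pass has emitted its per-command entries up to fused_ordering(1023), and nvmftestfini begins unwinding the environment: the EXIT trap is cleared, the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded (the rmmod lines above), and killprocess, traced in detail on the next lines, stops and waits on the nvmf_tgt instance with pid 3422634. Below is a minimal sketch of an equivalent teardown, assuming a $nvmfpid variable holds the target's pid the way SPDK's test/nvmf/common.sh does; the simplified function is illustrative, not the framework's actual helper.

#!/usr/bin/env bash
# Hedged sketch of the teardown sequence traced here; $nvmfpid is an assumed
# variable modeled on SPDK's test/nvmf/common.sh, not something from this log.
cleanup_nvmf_test() {
    trap - SIGINT SIGTERM EXIT             # drop the error-handling trap first
    sync                                   # flush outstanding I/O before unloading modules
    modprobe -v -r nvme-tcp || true        # transport modules may already be unloaded
    modprobe -v -r nvme-fabrics || true
    if [[ -n "${nvmfpid:-}" ]] && kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"                    # stop the nvmf_tgt application...
        wait "$nvmfpid" 2>/dev/null || true  # ...and reap it so the next test starts clean
    fi
}

cleanup_nvmf_test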
00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3422634 ']' 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3422634 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3422634 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3422634' 00:16:09.505 killing process with pid 3422634 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3422634 00:16:09.505 00:52:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3422634 00:16:09.505 [2024-05-15 00:52:56.555093] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:10.071 00:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.071 00:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:10.071 00:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:10.071 00:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.071 00:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.071 00:52:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.071 00:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.071 00:52:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.603 00:52:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:12.603 00:16:12.603 real 0m10.305s 00:16:12.603 user 0m5.579s 00:16:12.603 sys 0m4.590s 00:16:12.603 00:52:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:12.603 00:52:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:12.603 ************************************ 00:16:12.603 END TEST nvmf_fused_ordering 00:16:12.603 ************************************ 00:16:12.603 00:52:59 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:12.603 00:52:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:12.604 00:52:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:12.604 00:52:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.604 ************************************ 00:16:12.604 START TEST nvmf_delete_subsystem 00:16:12.604 ************************************ 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:12.604 * 
Looking for test storage... 00:16:12.604 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:12.604 00:52:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:17.875 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.875 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:17.875 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:17.876 Found net devices under 0000:27:00.0: cvl_0_0 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.876 00:53:04 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:17.876 Found net devices under 0000:27:00.1: cvl_0_1 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:17.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:16:17.876 00:16:17.876 --- 10.0.0.2 ping statistics --- 00:16:17.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.876 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:16:17.876 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:16:18.137 00:16:18.137 --- 10.0.0.1 ping statistics --- 00:16:18.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.137 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3427206 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3427206 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3427206 ']' 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:18.137 00:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.137 [2024-05-15 00:53:05.023920] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
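The block above is the harness wiring up its NVMe/TCP topology: both cvl interfaces are flushed, a network namespace cvl_0_0_ns_spdk is created, the target-side port cvl_0_0 is moved into it and given 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the default namespace, TCP port 4420 is opened in iptables, and reachability is verified with one ping in each direction before nvmf_tgt is launched inside the namespace via ip netns exec (with -m 0x3, matching the two reactors started on cores 0 and 1 below). A condensed sketch of the same wiring follows; the interface names and addresses come from the trace, the script structure is an assumption.

#!/usr/bin/env bash
# Sketch of the namespace/IP setup shown in the trace; cvl_0_0/cvl_0_1 and the
# 10.0.0.x addresses are taken from the log, everything else is illustrative.
set -e

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target-side port, moved into the namespace
INI_IF=cvl_0_1        # initiator-side port, stays in the default namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port

ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

Because the target runs inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3 in the trace), its 10.0.0.2:4420 listener is isolated from the host stack while the initiator tools connect over the two physical ports of the Intel NIC at 0000:27:00.x found earlier.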
00:16:18.137 [2024-05-15 00:53:05.023994] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.137 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.137 [2024-05-15 00:53:05.123859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:18.398 [2024-05-15 00:53:05.221517] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.398 [2024-05-15 00:53:05.221556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.398 [2024-05-15 00:53:05.221566] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.398 [2024-05-15 00:53:05.221575] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.398 [2024-05-15 00:53:05.221582] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.398 [2024-05-15 00:53:05.221669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.398 [2024-05-15 00:53:05.221682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.969 [2024-05-15 00:53:05.799269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.969 [2024-05-15 00:53:05.815303] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:18.969 [2024-05-15 00:53:05.815588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.969 NULL1 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.969 Delay0 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3427390 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:18.969 00:53:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:18.969 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.969 [2024-05-15 00:53:05.940548] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
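With the deprecation notices out of the way, the target side is fully configured: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, up to 10 namespaces) listening on 10.0.0.2:4420, and a 1000 MB null bdev (NULL1) wrapped in a delay bdev (Delay0) that adds roughly one second (1,000,000 us) of artificial latency to every read and write, attached as the subsystem's namespace. The delay is what keeps a full queue of commands in flight when the subsystem is deleted two seconds into the 5-second spdk_nvme_perf run (queue depth 128, 70/30 random read/write). The sketch below expresses the same sequence with scripts/rpc.py; the trace uses the framework's rpc_cmd wrapper, which forwards to the same RPCs, and the tool paths here are assumptions.

#!/usr/bin/env bash
# Sketch of the target configuration and workload traced above. RPC names and
# arguments mirror the trace; the rpc.py and spdk_nvme_perf paths are assumptions.
RPC="./scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Null backing device wrapped in a delay bdev: ~1s of added latency per I/O
# keeps commands outstanding long enough to be caught by the deletion.
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns "$NQN" Delay0

# Start a 5-second perf run, then delete the subsystem while its I/O is in flight.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$RPC nvmf_delete_subsystem "$NQN"
wait $perf_pid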
00:16:20.873 00:53:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.873 00:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.874 00:53:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 starting I/O failed: -6 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.132 [2024-05-15 00:53:08.126312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(5) to be set 00:16:21.132 Write completed with error (sct=0, sc=8) 00:16:21.132 Read 
completed with error (sct=0, sc=8) 00:16:21.132 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read 
completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 starting I/O failed: -6 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 [2024-05-15 00:53:08.127148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025100 is same with the state(5) to be set 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, 
sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Write completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:21.133 Read completed with error (sct=0, sc=8) 00:16:22.066 [2024-05-15 00:53:09.082106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000024c00 is same with the state(5) to be set 00:16:22.066 Read completed with error (sct=0, sc=8) 00:16:22.066 Write completed with error (sct=0, sc=8) 00:16:22.066 Read completed with error (sct=0, sc=8) 00:16:22.066 Read completed with error (sct=0, sc=8) 00:16:22.066 Read completed with error (sct=0, sc=8) 00:16:22.066 Read completed with error (sct=0, sc=8) 00:16:22.066 Write completed with error (sct=0, sc=8) 00:16:22.066 Write completed with error (sct=0, sc=8) 00:16:22.066 Read completed with error (sct=0, sc=8) 00:16:22.066 Write completed with error (sct=0, sc=8) 00:16:22.066 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 
00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 [2024-05-15 00:53:09.126202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030280 is same with the state(5) to be set 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 [2024-05-15 00:53:09.126385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030780 is same with the state(5) to be set 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 
00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 [2024-05-15 00:53:09.126989] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025880 is same with the state(5) to be set 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 Read completed with error (sct=0, sc=8) 00:16:22.067 Write completed with error (sct=0, sc=8) 00:16:22.067 [2024-05-15 00:53:09.127190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025380 is same with the state(5) to be set 00:16:22.327 Initializing NVMe Controllers 00:16:22.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:22.327 Controller IO queue size 128, less than required. 00:16:22.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:22.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:22.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:22.327 Initialization complete. Launching workers. 
00:16:22.327 ======================================================== 00:16:22.327 Latency(us) 00:16:22.327 Device Information : IOPS MiB/s Average min max 00:16:22.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.50 0.09 882575.32 482.24 1014819.40 00:16:22.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 175.01 0.09 884152.96 620.92 1014847.46 00:16:22.327 ======================================================== 00:16:22.327 Total : 351.50 0.17 883360.80 482.24 1014847.46 00:16:22.327 00:16:22.327 [2024-05-15 00:53:09.129316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000024c00 (9): Bad file descriptor 00:16:22.327 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:22.327 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.327 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:16:22.327 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3427390 00:16:22.327 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3427390 00:16:22.587 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3427390) - No such process 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3427390 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3427390 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3427390 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.587 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
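The pages of "Read/Write completed with error (sct=0, sc=8)" entries above are the expected fallout of this test: status code type 0 with status code 0x08 is the NVMe generic status "Command Aborted due to SQ Deletion", which is consistent with the subsystem being torn down while spdk_nvme_perf still has I/O outstanding; the summary table above then accounts for those aborted commands before perf reports "errors occurred". A minimal sketch for tallying such completions from a saved copy of this console output (the log filename is hypothetical, not part of the test):

    # Count aborted completions per direction in a saved console log.
    # 'sct=0, sc=8' = generic status, Command Aborted due to SQ Deletion.
    LOG=console.log   # hypothetical path to this build's captured output
    for op in Read Write; do
        count=$(grep -c "${op} completed with error (sct=0, sc=8)" "$LOG")
        echo "${op} completions aborted by SQ deletion: ${count}"
    done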
00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 [2024-05-15 00:53:09.655346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3428099 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3428099 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:22.845 00:53:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:22.845 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.845 [2024-05-15 00:53:09.752576] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
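Between the perf launch above and the next summary below, delete_subsystem.sh simply polls the perf process until deleting the subsystem forces it to exit. A rough sketch of that launch-and-poll pattern, reusing the spdk_nvme_perf arguments visible in the trace (binary path shortened; the loop bound and pid handling are assumed rather than copied from the script):

    # Launch perf against the TCP target shown in the trace, then poll it.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    delay=0
    # kill -0 only checks that the process still exists; it sends no signal.
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break   # give up after ~10s of 0.5s naps
        sleep 0.5
    done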
00:16:23.412 00:53:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:23.412 00:53:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3428099 00:16:23.412 00:53:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:23.672 00:53:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:23.672 00:53:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3428099 00:16:23.672 00:53:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:24.243 00:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:24.243 00:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3428099 00:16:24.243 00:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:24.808 00:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:24.808 00:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3428099 00:16:24.808 00:53:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:25.376 00:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:25.376 00:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3428099 00:16:25.376 00:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:25.634 00:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:25.634 00:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3428099 00:16:25.634 00:53:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:26.199 Initializing NVMe Controllers 00:16:26.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.199 Controller IO queue size 128, less than required. 00:16:26.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:26.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:26.199 Initialization complete. Launching workers. 
00:16:26.199 ======================================================== 00:16:26.199 Latency(us) 00:16:26.199 Device Information : IOPS MiB/s Average min max 00:16:26.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004253.21 1000152.95 1040991.40 00:16:26.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003439.05 1000200.94 1043650.15 00:16:26.199 ======================================================== 00:16:26.199 Total : 256.00 0.12 1003846.13 1000152.95 1043650.15 00:16:26.199 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3428099 00:16:26.199 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3428099) - No such process 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3428099 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:26.199 rmmod nvme_tcp 00:16:26.199 rmmod nvme_fabrics 00:16:26.199 rmmod nvme_keyring 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3427206 ']' 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3427206 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3427206 ']' 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3427206 00:16:26.199 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:16:26.459 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:26.459 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3427206 00:16:26.459 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:26.459 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:26.459 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3427206' 00:16:26.459 killing process with pid 3427206 00:16:26.459 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3427206 00:16:26.459 [2024-05-15 00:53:13.299915] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:26.459 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3427206 00:16:26.719 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.719 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.719 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.719 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.719 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.719 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.719 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.719 00:53:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.307 00:53:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:29.307 00:16:29.307 real 0m16.667s 00:16:29.307 user 0m30.975s 00:16:29.307 sys 0m4.941s 00:16:29.307 00:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:29.307 00:53:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:29.307 ************************************ 00:16:29.307 END TEST nvmf_delete_subsystem 00:16:29.307 ************************************ 00:16:29.307 00:53:15 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:29.307 00:53:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:29.307 00:53:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:29.307 00:53:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:29.307 ************************************ 00:16:29.307 START TEST nvmf_ns_masking 00:16:29.307 ************************************ 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:29.307 * Looking for test storage... 
00:16:29.307 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.307 00:53:15 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=94a37f7d-9dfe-4c28-a2cd-df179b29d0b3 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:29.308 00:53:15 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:29.308 00:53:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:35.874 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:35.874 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:35.874 Found net devices under 0000:27:00.0: cvl_0_0 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.874 
00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:35.874 Found net devices under 0000:27:00.1: cvl_0_1 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.874 00:53:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:35.874 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:16:35.874 00:16:35.874 --- 10.0.0.2 ping statistics --- 00:16:35.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.874 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:35.874 00:16:35.874 --- 10.0.0.1 ping statistics --- 00:16:35.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.874 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:35.874 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3432905 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3432905 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3432905 ']' 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:35.875 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:35.875 [2024-05-15 00:53:22.245680] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
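The interface setup traced above gives the target and initiator separate network stacks on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and the two-way ping confirms the path before nvmf_tgt is started inside the namespace. A condensed sketch of that sequence (interface and namespace names copied from the trace; error handling and cleanup omitted):

    # Put the target-side port in its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator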
00:16:35.875 [2024-05-15 00:53:22.245780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.875 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.875 [2024-05-15 00:53:22.341880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.875 [2024-05-15 00:53:22.438833] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.875 [2024-05-15 00:53:22.438869] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.875 [2024-05-15 00:53:22.438878] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.875 [2024-05-15 00:53:22.438887] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.875 [2024-05-15 00:53:22.438894] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.875 [2024-05-15 00:53:22.439002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.875 [2024-05-15 00:53:22.439083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.875 [2024-05-15 00:53:22.439164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.875 [2024-05-15 00:53:22.439174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.136 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:36.136 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:16:36.136 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.136 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.136 00:53:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:36.136 00:53:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.136 00:53:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.136 [2024-05-15 00:53:23.109470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.136 00:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:16:36.136 00:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:16:36.136 00:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:36.396 Malloc1 00:16:36.397 00:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:36.655 Malloc2 00:16:36.655 00:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:36.655 00:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:36.912 00:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.913 [2024-05-15 00:53:23.957337] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:36.913 [2024-05-15 00:53:23.957616] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.170 00:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:16:37.171 00:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94a37f7d-9dfe-4c28-a2cd-df179b29d0b3 -a 10.0.0.2 -s 4420 -i 4 00:16:37.171 00:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:16:37.171 00:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:16:37.171 00:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:37.171 00:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:37.171 00:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:39.709 [ 0]:0x1 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fc7126f434ba4bf3b50b6c9c25d17181 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fc7126f434ba4bf3b50b6c9c25d17181 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # 
ns_is_visible 0x1 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:39.709 [ 0]:0x1 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:39.709 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fc7126f434ba4bf3b50b6c9c25d17181 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fc7126f434ba4bf3b50b6c9c25d17181 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:39.710 [ 1]:0x2 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2dffa8f4292045cc9b1132b2f110b992 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2dffa8f4292045cc9b1132b2f110b992 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:39.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.710 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.968 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:39.968 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:16:39.968 00:53:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94a37f7d-9dfe-4c28-a2cd-df179b29d0b3 -a 10.0.0.2 -s 4420 -i 4 00:16:40.226 00:53:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:40.226 00:53:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:16:40.226 00:53:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.226 00:53:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:16:40.226 00:53:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:16:40.226 00:53:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:16:42.133 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:42.133 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:42.133 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 
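The connect helper in this trace attaches the initiator with a fixed host NQN and host ID, then waits for the expected number of namespaces to appear as block devices. A stripped-down sketch of that connect-and-wait step (addresses, NQNs, host ID and serial are taken from this run; the retry bound is an assumption, not the harness code):

    # Connect with an explicit host NQN/ID so the masking rules apply to us,
    # then wait until the expected namespace count shows up in lsblk.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 94a37f7d-9dfe-4c28-a2cd-df179b29d0b3 -i 4

    expected=1
    for _ in $(seq 1 15); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
        (( found == expected )) && break
        sleep 2
    done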
00:16:42.133 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:42.133 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.133 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:16:42.133 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:42.133 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:42.394 [ 0]:0x2 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2dffa8f4292045cc9b1132b2f110b992 00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2dffa8f4292045cc9b1132b2f110b992 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 
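Each ns_is_visible / NOT ns_is_visible step above boils down to two nvme-cli probes: list-ns to see whether the NSID is exposed at all, and id-ns to compare the reported NGUID against all zeroes (in this trace the masked namespace reports an all-zero NGUID). A compact sketch of that check using the same commands as the trace (nvme0 is the controller name discovered earlier in this run):

    # Return 0 if namespace $1 (e.g. 0x1) is visible on /dev/nvme0.
    ns_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_visible 0x1 && echo "nsid 1 visible" || echo "nsid 1 masked"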
00:16:42.394 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:42.654 [ 0]:0x1 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fc7126f434ba4bf3b50b6c9c25d17181 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fc7126f434ba4bf3b50b6c9c25d17181 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:42.654 [ 1]:0x2 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2dffa8f4292045cc9b1132b2f110b992 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2dffa8f4292045cc9b1132b2f110b992 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.654 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 
00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:42.914 [ 0]:0x2 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2dffa8f4292045cc9b1132b2f110b992 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2dffa8f4292045cc9b1132b2f110b992 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.914 00:53:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:43.172 00:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:16:43.172 00:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94a37f7d-9dfe-4c28-a2cd-df179b29d0b3 -a 10.0.0.2 -s 4420 -i 4 00:16:43.429 00:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:43.429 00:53:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:16:43.429 00:53:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.429 00:53:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:16:43.429 00:53:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:16:43.429 00:53:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:45.334 00:53:32 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:45.334 [ 0]:0x1 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fc7126f434ba4bf3b50b6c9c25d17181 00:16:45.334 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fc7126f434ba4bf3b50b6c9c25d17181 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.335 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:16:45.335 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.335 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:45.335 [ 1]:0x2 00:16:45.335 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.335 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2dffa8f4292045cc9b1132b2f110b992 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2dffa8f4292045cc9b1132b2f110b992 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.594 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # 
nguid=00000000000000000000000000000000 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:45.595 [ 0]:0x2 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2dffa8f4292045cc9b1132b2f110b992 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2dffa8f4292045cc9b1132b2f110b992 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:16:45.595 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:45.854 [2024-05-15 00:53:32.789109] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:45.855 request: 00:16:45.855 { 00:16:45.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:16:45.855 "nsid": 2, 00:16:45.855 "host": "nqn.2016-06.io.spdk:host1", 00:16:45.855 "method": "nvmf_ns_remove_host", 00:16:45.855 "req_id": 1 00:16:45.855 } 00:16:45.855 Got JSON-RPC error response 00:16:45.855 response: 00:16:45.855 { 00:16:45.855 "code": -32602, 00:16:45.855 "message": "Invalid parameters" 00:16:45.855 } 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:45.855 [ 0]:0x2 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.855 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:46.172 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2dffa8f4292045cc9b1132b2f110b992 00:16:46.172 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2dffa8f4292045cc9b1132b2f110b992 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:46.172 00:53:32 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@108 -- # disconnect 00:16:46.172 00:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.172 00:53:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.430 rmmod nvme_tcp 00:16:46.430 rmmod nvme_fabrics 00:16:46.430 rmmod nvme_keyring 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3432905 ']' 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3432905 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3432905 ']' 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3432905 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3432905 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3432905' 00:16:46.430 killing process with pid 3432905 00:16:46.430 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3432905 00:16:46.431 [2024-05-15 00:53:33.372898] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:46.431 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3432905 00:16:46.996 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:46.996 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:46.996 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:46.996 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.996 00:53:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:46.996 00:53:33 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.996 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.996 00:53:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.532 00:53:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.532 00:16:49.532 real 0m20.123s 00:16:49.532 user 0m49.355s 00:16:49.532 sys 0m5.916s 00:16:49.532 00:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:49.532 00:53:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:49.532 ************************************ 00:16:49.532 END TEST nvmf_ns_masking 00:16:49.532 ************************************ 00:16:49.532 00:53:36 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:16:49.532 00:53:36 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:49.532 00:53:36 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:49.532 00:53:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:49.532 00:53:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:49.532 00:53:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:49.532 ************************************ 00:16:49.532 START TEST nvmf_host_management 00:16:49.532 ************************************ 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:49.532 * Looking for test storage... 00:16:49.532 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.532 00:53:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.533 00:53:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.851 00:53:41 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:54.851 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:54.851 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b 
== \0\x\1\0\1\9 ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:54.851 Found net devices under 0000:27:00.0: cvl_0_0 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:54.851 Found net devices under 0000:27:00.1: cvl_0_1 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.851 00:53:41 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:16:54.851 00:16:54.851 --- 10.0.0.2 ping statistics --- 00:16:54.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.851 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:16:54.851 00:16:54.851 --- 10.0.0.1 ping statistics --- 00:16:54.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.851 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3439158 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3439158 00:16:54.851 00:53:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3439158 ']' 00:16:54.852 00:53:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.852 00:53:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:54.852 00:53:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.852 00:53:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:54.852 00:53:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:54.852 00:53:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:55.117 [2024-05-15 00:53:41.928627] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:55.117 [2024-05-15 00:53:41.928729] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.117 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.117 [2024-05-15 00:53:42.073140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.378 [2024-05-15 00:53:42.233600] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.378 [2024-05-15 00:53:42.233654] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.378 [2024-05-15 00:53:42.233671] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.378 [2024-05-15 00:53:42.233687] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.378 [2024-05-15 00:53:42.233699] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:55.378 [2024-05-15 00:53:42.233854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.378 [2024-05-15 00:53:42.233976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.378 [2024-05-15 00:53:42.234117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.378 [2024-05-15 00:53:42.234141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:55.637 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:55.637 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:55.637 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:55.637 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:55.637 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.637 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.637 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.637 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.637 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.637 [2024-05-15 00:53:42.693717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.897 00:53:42 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.897 Malloc0 00:16:55.897 [2024-05-15 00:53:42.790911] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:55.897 [2024-05-15 00:53:42.791274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3439486 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3439486 /var/tmp/bdevperf.sock 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3439486 ']' 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:55.897 { 00:16:55.897 "params": { 00:16:55.897 "name": "Nvme$subsystem", 00:16:55.897 "trtype": "$TEST_TRANSPORT", 00:16:55.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.897 "adrfam": "ipv4", 00:16:55.897 "trsvcid": "$NVMF_PORT", 00:16:55.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.897 "hdgst": ${hdgst:-false}, 00:16:55.897 "ddgst": ${ddgst:-false} 00:16:55.897 }, 00:16:55.897 "method": "bdev_nvme_attach_controller" 00:16:55.897 } 00:16:55.897 EOF 00:16:55.897 )") 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
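gen_nvmf_target_json, running above and continuing below, builds the --json payload handed to bdevperf on /dev/fd/63: each requested subsystem becomes one bdev_nvme_attach_controller stanza written as a heredoc string into the config array, the stanzas are comma-joined, and jq checks the assembled document. A stripped-down, runnable sketch of that pattern follows; the outer [ ... ] wrapper is added here only to keep the sketch valid JSON, since the helper's real enclosing template is not echoed in this log.

# Minimal sketch of the heredoc-to-array pattern used by gen_nvmf_target_json.
# Only the inner stanza appears verbatim in the log; the wrapper below is illustrative.
config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Comma-join the stanzas (the subshell keeps the IFS change local) and let jq
# validate the result, as the helper's own `jq .` does for its full document.
(IFS=,; printf '[ %s ]\n' "${config[*]}") | jq .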
00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:55.897 00:53:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:55.897 "params": { 00:16:55.897 "name": "Nvme0", 00:16:55.897 "trtype": "tcp", 00:16:55.897 "traddr": "10.0.0.2", 00:16:55.897 "adrfam": "ipv4", 00:16:55.897 "trsvcid": "4420", 00:16:55.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:55.897 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:55.897 "hdgst": false, 00:16:55.897 "ddgst": false 00:16:55.897 }, 00:16:55.897 "method": "bdev_nvme_attach_controller" 00:16:55.897 }' 00:16:55.897 [2024-05-15 00:53:42.927036] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:16:55.897 [2024-05-15 00:53:42.927183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439486 ] 00:16:56.155 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.155 [2024-05-15 00:53:43.055123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.155 [2024-05-15 00:53:43.146699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.412 Running I/O for 10 seconds... 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.673 00:53:43 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.673 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.673 [2024-05-15 00:53:43.700896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.673 [2024-05-15 00:53:43.700957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.673 [2024-05-15 00:53:43.701000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.673 [2024-05-15 00:53:43.701013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.673 [2024-05-15 00:53:43.701027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.673 [2024-05-15 00:53:43.701038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.673 [2024-05-15 00:53:43.701056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.673 [2024-05-15 00:53:43.701066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.673 [2024-05-15 00:53:43.701079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.673 [2024-05-15 00:53:43.701089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.673 [2024-05-15 00:53:43.701101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.673 [2024-05-15 00:53:43.701117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.673 [2024-05-15 00:53:43.701131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.673 [2024-05-15 00:53:43.701141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.673 [2024-05-15 00:53:43.701154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.673 [2024-05-15 00:53:43.701164] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.673 [2024-05-15 00:53:43.701177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.673 [2024-05-15 00:53:43.701188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.674 [2024-05-15 00:53:43.701269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:56.674 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:56.674 [2024-05-15 00:53:43.701637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 
[2024-05-15 00:53:43.701854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.674 [2024-05-15 00:53:43.701886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.674 [2024-05-15 00:53:43.701979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.674 [2024-05-15 00:53:43.701992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.675 [2024-05-15 00:53:43.702097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 
00:53:43.702302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.675 [2024-05-15 00:53:43.702460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.675 [2024-05-15 00:53:43.702472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a4100 is same with the state(5) to be set 00:16:56.675 [2024-05-15 00:53:43.702628] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a4100 was disconnected and freed. reset controller. 
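The burst of "ABORTED - SQ DELETION" completions above is the expected fallout of revoking host access while bdevperf still has I/O queued: the target tears down the I/O qpair, every outstanding READ/WRITE is completed as aborted, and the initiator frees the qpair and resets the controller. A minimal sketch of the step being exercised here (host_management.sh@84-85), using only RPCs that appear in this log and assuming the in-tree scripts/rpc.py:

  # revoke the host's access; outstanding I/O on the initiator is aborted (SQ DELETION)
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # restore access so the controller reset that follows can reconnect and resume I/O
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0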
00:16:56.675 [2024-05-15 00:53:43.703912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:56.675 task offset: 81792 on job bdev=Nvme0n1 fails 00:16:56.675 00:16:56.675 Latency(us) 00:16:56.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.675 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:56.675 Job: Nvme0n1 ended in about 0.34 seconds with error 00:16:56.675 Verification LBA range: start 0x0 length 0x400 00:16:56.675 Nvme0n1 : 0.34 1711.54 106.97 190.17 0.00 32799.85 7381.42 30353.52 00:16:56.675 =================================================================================================================== 00:16:56.675 Total : 1711.54 106.97 190.17 0.00 32799.85 7381.42 30353.52 00:16:56.675 [2024-05-15 00:53:43.707417] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:56.675 [2024-05-15 00:53:43.707458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:16:56.675 00:53:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.675 00:53:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:56.933 [2024-05-15 00:53:43.756717] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:57.872 00:53:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3439486 00:16:57.872 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3439486) - No such process 00:16:57.872 00:53:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:57.872 00:53:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.873 { 00:16:57.873 "params": { 00:16:57.873 "name": "Nvme$subsystem", 00:16:57.873 "trtype": "$TEST_TRANSPORT", 00:16:57.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.873 "adrfam": "ipv4", 00:16:57.873 "trsvcid": "$NVMF_PORT", 00:16:57.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.873 "hdgst": ${hdgst:-false}, 00:16:57.873 "ddgst": ${ddgst:-false} 00:16:57.873 }, 00:16:57.873 "method": "bdev_nvme_attach_controller" 00:16:57.873 } 00:16:57.873 EOF 00:16:57.873 )") 00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
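The bdevperf relaunch above feeds a generated NVMe-oF attach config to the tool over an anonymous pipe; the --json /dev/fd/62 in the command line is consistent with bash process substitution. A rough sketch of that pattern, reusing only the flags and helper names visible in this log (the bdevperf path is shortened, and the exact plumbing inside gen_nvmf_target_json is an assumption):

  # gen_nvmf_target_json 0 emits a bdev_nvme_attach_controller config for Nvme0
  # (the resulting JSON, printed just below, targets 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode0)
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1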
00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:57.873 00:53:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.873 "params": { 00:16:57.873 "name": "Nvme0", 00:16:57.873 "trtype": "tcp", 00:16:57.873 "traddr": "10.0.0.2", 00:16:57.873 "adrfam": "ipv4", 00:16:57.873 "trsvcid": "4420", 00:16:57.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:57.873 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:57.873 "hdgst": false, 00:16:57.873 "ddgst": false 00:16:57.873 }, 00:16:57.873 "method": "bdev_nvme_attach_controller" 00:16:57.873 }' 00:16:57.873 [2024-05-15 00:53:44.789401] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:16:57.873 [2024-05-15 00:53:44.789521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439808 ] 00:16:57.873 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.873 [2024-05-15 00:53:44.902408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.130 [2024-05-15 00:53:44.993599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.388 Running I/O for 1 seconds... 00:16:59.764 00:16:59.764 Latency(us) 00:16:59.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.764 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:59.764 Verification LBA range: start 0x0 length 0x400 00:16:59.764 Nvme0n1 : 1.01 2280.97 142.56 0.00 0.00 27640.36 6001.72 24006.87 00:16:59.764 =================================================================================================================== 00:16:59.764 Total : 2280.97 142.56 0.00 0.00 27640.36 6001.72 24006.87 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:59.764 rmmod nvme_tcp 00:16:59.764 rmmod nvme_fabrics 00:16:59.764 rmmod nvme_keyring 00:16:59.764 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 
3439158 ']' 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3439158 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3439158 ']' 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3439158 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3439158 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3439158' 00:17:00.024 killing process with pid 3439158 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3439158 00:17:00.024 [2024-05-15 00:53:46.868739] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:00.024 00:53:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3439158 00:17:00.284 [2024-05-15 00:53:47.321609] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:00.544 00:53:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:00.544 00:53:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:00.544 00:53:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:00.544 00:53:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.544 00:53:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.544 00:53:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.544 00:53:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.544 00:53:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.451 00:53:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:02.451 00:53:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:02.451 00:17:02.451 real 0m13.366s 00:17:02.451 user 0m24.521s 00:17:02.451 sys 0m5.441s 00:17:02.451 00:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:02.451 00:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.451 ************************************ 00:17:02.451 END TEST nvmf_host_management 00:17:02.451 ************************************ 00:17:02.451 00:53:49 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:02.451 00:53:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:02.451 00:53:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:02.451 00:53:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:17:02.709 ************************************ 00:17:02.709 START TEST nvmf_lvol 00:17:02.709 ************************************ 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:02.709 * Looking for test storage... 00:17:02.709 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.709 00:53:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.710 00:53:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.036 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:08.036 00:53:54 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:08.037 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:08.037 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:08.037 Found net devices under 0000:27:00.0: cvl_0_0 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:27:00.1: cvl_0_1' 00:17:08.037 Found net devices under 0000:27:00.1: cvl_0_1 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:17:08.037 00:17:08.037 --- 10.0.0.2 ping statistics --- 00:17:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.037 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:17:08.037 00:17:08.037 --- 10.0.0.1 ping statistics --- 00:17:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.037 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3444235 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3444235 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3444235 ']' 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.037 00:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:08.037 [2024-05-15 00:53:54.790817] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:17:08.037 [2024-05-15 00:53:54.790921] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.037 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.037 [2024-05-15 00:53:54.908987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.037 [2024-05-15 00:53:55.002108] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.037 [2024-05-15 00:53:55.002145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:08.037 [2024-05-15 00:53:55.002155] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.037 [2024-05-15 00:53:55.002164] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.037 [2024-05-15 00:53:55.002171] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.037 [2024-05-15 00:53:55.002369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.037 [2024-05-15 00:53:55.002439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.037 [2024-05-15 00:53:55.002445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.608 00:53:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:08.608 00:53:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:17:08.608 00:53:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.608 00:53:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.608 00:53:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.608 00:53:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.608 00:53:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:08.608 [2024-05-15 00:53:55.663319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.868 00:53:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.869 00:53:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:08.869 00:53:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:09.127 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:09.127 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:09.386 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:09.386 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a0634404-62a6-4b63-94a9-7e54ee991a5d 00:17:09.386 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0634404-62a6-4b63-94a9-7e54ee991a5d lvol 20 00:17:09.644 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ddb1a631-e01c-4310-bf02-6f0c55ba9b2a 00:17:09.644 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:09.644 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ddb1a631-e01c-4310-bf02-6f0c55ba9b2a 00:17:09.903 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:09.904 [2024-05-15 00:53:56.925334] 
nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:09.904 [2024-05-15 00:53:56.925632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.904 00:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:10.164 00:53:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3444607 00:17:10.164 00:53:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:10.164 00:53:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:10.164 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.098 00:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ddb1a631-e01c-4310-bf02-6f0c55ba9b2a MY_SNAPSHOT 00:17:11.357 00:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0ac701f4-bb9f-4963-87bc-26d63427fdd9 00:17:11.358 00:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ddb1a631-e01c-4310-bf02-6f0c55ba9b2a 30 00:17:11.618 00:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0ac701f4-bb9f-4963-87bc-26d63427fdd9 MY_CLONE 00:17:11.618 00:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a67b1151-99d3-4392-bd6c-21fc1e84613e 00:17:11.618 00:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a67b1151-99d3-4392-bd6c-21fc1e84613e 00:17:12.189 00:53:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3444607 00:17:22.177 Initializing NVMe Controllers 00:17:22.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:22.177 Controller IO queue size 128, less than required. 00:17:22.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:22.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:22.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:22.177 Initialization complete. Launching workers. 
00:17:22.177 ======================================================== 00:17:22.177 Latency(us) 00:17:22.177 Device Information : IOPS MiB/s Average min max 00:17:22.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13779.90 53.83 9289.83 226.64 79022.32 00:17:22.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13483.10 52.67 9494.95 2366.49 82552.21 00:17:22.177 ======================================================== 00:17:22.177 Total : 27262.99 106.50 9391.28 226.64 82552.21 00:17:22.177 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ddb1a631-e01c-4310-bf02-6f0c55ba9b2a 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0634404-62a6-4b63-94a9-7e54ee991a5d 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.177 rmmod nvme_tcp 00:17:22.177 rmmod nvme_fabrics 00:17:22.177 rmmod nvme_keyring 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3444235 ']' 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3444235 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3444235 ']' 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3444235 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.177 00:54:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3444235 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3444235' 00:17:22.177 killing process with pid 3444235 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3444235 00:17:22.177 [2024-05-15 00:54:08.024055] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in 
v24.09 hit 1 times 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3444235 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.177 00:54:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:24.082 00:17:24.082 real 0m21.155s 00:17:24.082 user 1m2.863s 00:17:24.082 sys 0m5.931s 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:24.082 ************************************ 00:17:24.082 END TEST nvmf_lvol 00:17:24.082 ************************************ 00:17:24.082 00:54:10 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:24.082 00:54:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:24.082 00:54:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:24.082 00:54:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:24.082 ************************************ 00:17:24.082 START TEST nvmf_lvs_grow 00:17:24.082 ************************************ 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:24.082 * Looking for test storage... 
00:17:24.082 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.082 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.083 00:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow 
-- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:29.356 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:29.356 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:29.356 Found net devices under 0000:27:00.0: cvl_0_0 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:29.356 Found net devices under 0000:27:00.1: cvl_0_1 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.356 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:29.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:17:29.356 00:17:29.356 --- 10.0.0.2 ping statistics --- 00:17:29.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.356 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:17:29.357 00:17:29.357 --- 10.0.0.1 ping statistics --- 00:17:29.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.357 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3451248 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3451248 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3451248 ']' 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:29.357 00:54:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.357 [2024-05-15 00:54:16.383169] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:17:29.357 [2024-05-15 00:54:16.383270] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.615 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.615 [2024-05-15 00:54:16.505496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.615 [2024-05-15 00:54:16.601290] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.615 [2024-05-15 00:54:16.601329] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
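For orientation, the target bring-up that this part of the trace performs boils down to two steps. The sketch below is assembled only from commands visible in this log; the SPDK checkout path and the cvl_0_0_ns_spdk namespace name are specific to this CI host, and the transport options simply mirror what the harness passes.

  SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
  # Launch the NVMe-oF target inside the test namespace: -i sets the shared-memory id,
  # -e 0xFFFF enables all tracepoint groups (see the "Tracepoint Group Mask" notice), -m 0x1 pins it to core 0.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  # Once it listens on /var/tmp/spdk.sock, create the TCP transport with the options used by the harness.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192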
00:17:29.615 [2024-05-15 00:54:16.601338] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.615 [2024-05-15 00:54:16.601348] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.615 [2024-05-15 00:54:16.601355] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.615 [2024-05-15 00:54:16.601387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.185 00:54:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:30.185 00:54:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:30.185 00:54:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.185 00:54:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.185 00:54:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.185 00:54:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.185 00:54:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.185 [2024-05-15 00:54:17.243777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.443 00:54:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:30.443 00:54:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:30.443 00:54:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.444 ************************************ 00:17:30.444 START TEST lvs_grow_clean 00:17:30.444 ************************************ 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.444 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:30.701 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:30.701 
00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:30.701 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:30.701 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:30.701 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:30.962 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:30.962 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:30.962 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 lvol 150 00:17:30.962 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=682b79f8-b107-4431-afd6-dd0a4d5148bd 00:17:30.962 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.962 00:54:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:31.220 [2024-05-15 00:54:18.039860] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:31.220 [2024-05-15 00:54:18.039929] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:31.220 true 00:17:31.220 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:31.220 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:31.220 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:31.220 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:31.480 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 682b79f8-b107-4431-afd6-dd0a4d5148bd 00:17:31.480 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:31.740 [2024-05-15 00:54:18.576035] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:31.740 [2024-05-15 00:54:18.576309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3451756 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3451756 /var/tmp/bdevperf.sock 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3451756 ']' 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:31.740 00:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:31.740 [2024-05-15 00:54:18.797587] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
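The client side of this test follows a fixed pattern that is easier to see outside the trace: start bdevperf on its own RPC socket, attach the exported subsystem as an NVMe/TCP controller, confirm the namespace shows up as Nvme0n1, then kick off the I/O run. A condensed sketch using only the commands that appear in this log (socket path and addresses are the harness defaults):

  SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  # 128-deep 4 KiB random writes for 10 s on core 1; the run only starts once perform_tests is sent over the socket.
  "$SPDK/build/examples/bdevperf" -r $SOCK -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  "$SPDK/scripts/rpc.py" -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  "$SPDK/scripts/rpc.py" -s $SOCK bdev_get_bdevs -b Nvme0n1 -t 3000
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s $SOCK perform_tests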
00:17:31.740 [2024-05-15 00:54:18.797713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451756 ] 00:17:32.000 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.000 [2024-05-15 00:54:18.937149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.258 [2024-05-15 00:54:19.092690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.515 00:54:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.515 00:54:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:32.515 00:54:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:32.774 Nvme0n1 00:17:32.774 00:54:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:33.034 [ 00:17:33.034 { 00:17:33.034 "name": "Nvme0n1", 00:17:33.034 "aliases": [ 00:17:33.034 "682b79f8-b107-4431-afd6-dd0a4d5148bd" 00:17:33.034 ], 00:17:33.034 "product_name": "NVMe disk", 00:17:33.034 "block_size": 4096, 00:17:33.034 "num_blocks": 38912, 00:17:33.034 "uuid": "682b79f8-b107-4431-afd6-dd0a4d5148bd", 00:17:33.034 "assigned_rate_limits": { 00:17:33.034 "rw_ios_per_sec": 0, 00:17:33.034 "rw_mbytes_per_sec": 0, 00:17:33.034 "r_mbytes_per_sec": 0, 00:17:33.034 "w_mbytes_per_sec": 0 00:17:33.034 }, 00:17:33.034 "claimed": false, 00:17:33.034 "zoned": false, 00:17:33.034 "supported_io_types": { 00:17:33.034 "read": true, 00:17:33.034 "write": true, 00:17:33.034 "unmap": true, 00:17:33.034 "write_zeroes": true, 00:17:33.034 "flush": true, 00:17:33.034 "reset": true, 00:17:33.034 "compare": true, 00:17:33.034 "compare_and_write": true, 00:17:33.034 "abort": true, 00:17:33.034 "nvme_admin": true, 00:17:33.034 "nvme_io": true 00:17:33.034 }, 00:17:33.034 "memory_domains": [ 00:17:33.034 { 00:17:33.034 "dma_device_id": "system", 00:17:33.034 "dma_device_type": 1 00:17:33.034 } 00:17:33.034 ], 00:17:33.034 "driver_specific": { 00:17:33.034 "nvme": [ 00:17:33.034 { 00:17:33.034 "trid": { 00:17:33.034 "trtype": "TCP", 00:17:33.034 "adrfam": "IPv4", 00:17:33.034 "traddr": "10.0.0.2", 00:17:33.034 "trsvcid": "4420", 00:17:33.034 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:33.034 }, 00:17:33.034 "ctrlr_data": { 00:17:33.034 "cntlid": 1, 00:17:33.034 "vendor_id": "0x8086", 00:17:33.034 "model_number": "SPDK bdev Controller", 00:17:33.034 "serial_number": "SPDK0", 00:17:33.034 "firmware_revision": "24.05", 00:17:33.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:33.034 "oacs": { 00:17:33.034 "security": 0, 00:17:33.034 "format": 0, 00:17:33.034 "firmware": 0, 00:17:33.034 "ns_manage": 0 00:17:33.034 }, 00:17:33.034 "multi_ctrlr": true, 00:17:33.034 "ana_reporting": false 00:17:33.034 }, 00:17:33.034 "vs": { 00:17:33.034 "nvme_version": "1.3" 00:17:33.034 }, 00:17:33.034 "ns_data": { 00:17:33.034 "id": 1, 00:17:33.034 "can_share": true 00:17:33.034 } 00:17:33.034 } 00:17:33.034 ], 00:17:33.034 "mp_policy": "active_passive" 00:17:33.034 } 00:17:33.034 } 00:17:33.034 ] 00:17:33.034 00:54:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:33.034 00:54:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3452052 00:17:33.034 00:54:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:33.034 Running I/O for 10 seconds... 00:17:33.972 Latency(us) 00:17:33.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.972 Nvme0n1 : 1.00 23132.00 90.36 0.00 0.00 0.00 0.00 0.00 00:17:33.972 =================================================================================================================== 00:17:33.972 Total : 23132.00 90.36 0.00 0.00 0.00 0.00 0.00 00:17:33.972 00:17:34.906 00:54:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:35.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.165 Nvme0n1 : 2.00 23089.00 90.19 0.00 0.00 0.00 0.00 0.00 00:17:35.165 =================================================================================================================== 00:17:35.165 Total : 23089.00 90.19 0.00 0.00 0.00 0.00 0.00 00:17:35.165 00:17:35.165 true 00:17:35.165 00:54:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:35.166 00:54:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:35.425 00:54:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:35.425 00:54:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:35.425 00:54:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3452052 00:17:35.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.991 Nvme0n1 : 3.00 22971.33 89.73 0.00 0.00 0.00 0.00 0.00 00:17:35.991 =================================================================================================================== 00:17:35.991 Total : 22971.33 89.73 0.00 0.00 0.00 0.00 0.00 00:17:35.991 00:17:37.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.370 Nvme0n1 : 4.00 22982.50 89.78 0.00 0.00 0.00 0.00 0.00 00:17:37.370 =================================================================================================================== 00:17:37.370 Total : 22982.50 89.78 0.00 0.00 0.00 0.00 0.00 00:17:37.370 00:17:37.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.937 Nvme0n1 : 5.00 22998.80 89.84 0.00 0.00 0.00 0.00 0.00 00:17:37.937 =================================================================================================================== 00:17:37.937 Total : 22998.80 89.84 0.00 0.00 0.00 0.00 0.00 00:17:37.937 00:17:39.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.314 Nvme0n1 : 6.00 22988.33 89.80 0.00 0.00 0.00 0.00 0.00 00:17:39.314 
=================================================================================================================== 00:17:39.314 Total : 22988.33 89.80 0.00 0.00 0.00 0.00 0.00 00:17:39.315 00:17:40.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.251 Nvme0n1 : 7.00 23010.57 89.89 0.00 0.00 0.00 0.00 0.00 00:17:40.251 =================================================================================================================== 00:17:40.251 Total : 23010.57 89.89 0.00 0.00 0.00 0.00 0.00 00:17:40.251 00:17:41.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.190 Nvme0n1 : 8.00 23028.25 89.95 0.00 0.00 0.00 0.00 0.00 00:17:41.190 =================================================================================================================== 00:17:41.190 Total : 23028.25 89.95 0.00 0.00 0.00 0.00 0.00 00:17:41.190 00:17:42.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.132 Nvme0n1 : 9.00 23037.56 89.99 0.00 0.00 0.00 0.00 0.00 00:17:42.132 =================================================================================================================== 00:17:42.132 Total : 23037.56 89.99 0.00 0.00 0.00 0.00 0.00 00:17:42.132 00:17:43.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.069 Nvme0n1 : 10.00 23029.00 89.96 0.00 0.00 0.00 0.00 0.00 00:17:43.069 =================================================================================================================== 00:17:43.069 Total : 23029.00 89.96 0.00 0.00 0.00 0.00 0.00 00:17:43.069 00:17:43.069 00:17:43.069 Latency(us) 00:17:43.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.069 Nvme0n1 : 10.01 23028.76 89.96 0.00 0.00 5553.61 2949.12 12831.26 00:17:43.069 =================================================================================================================== 00:17:43.069 Total : 23028.76 89.96 0.00 0.00 5553.61 2949.12 12831.26 00:17:43.069 0 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3451756 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3451756 ']' 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3451756 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3451756 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3451756' 00:17:43.069 killing process with pid 3451756 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3451756 00:17:43.069 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.069 00:17:43.069 Latency(us) 00:17:43.069 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:43.069 =================================================================================================================== 00:17:43.069 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:43.069 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3451756 00:17:43.640 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:43.640 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:43.900 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:43.901 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:43.901 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:43.901 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:43.901 00:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:44.160 [2024-05-15 00:54:31.016608] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:44.160 request: 00:17:44.160 { 00:17:44.160 "uuid": "ddbe845f-d2db-4fc1-8880-653fd6967ea9", 00:17:44.160 "method": "bdev_lvol_get_lvstores", 00:17:44.160 "req_id": 1 00:17:44.160 } 00:17:44.160 Got JSON-RPC error response 00:17:44.160 response: 00:17:44.160 { 00:17:44.160 "code": -19, 00:17:44.160 "message": "No such device" 00:17:44.160 } 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:44.160 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:44.419 aio_bdev 00:17:44.419 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 682b79f8-b107-4431-afd6-dd0a4d5148bd 00:17:44.419 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=682b79f8-b107-4431-afd6-dd0a4d5148bd 00:17:44.419 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:44.419 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:44.419 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:44.419 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:44.419 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:44.419 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 682b79f8-b107-4431-afd6-dd0a4d5148bd -t 2000 00:17:44.678 [ 00:17:44.678 { 00:17:44.678 "name": "682b79f8-b107-4431-afd6-dd0a4d5148bd", 00:17:44.678 "aliases": [ 00:17:44.678 "lvs/lvol" 00:17:44.678 ], 00:17:44.678 "product_name": "Logical Volume", 00:17:44.678 "block_size": 4096, 00:17:44.678 "num_blocks": 38912, 00:17:44.678 "uuid": "682b79f8-b107-4431-afd6-dd0a4d5148bd", 00:17:44.678 "assigned_rate_limits": { 00:17:44.678 "rw_ios_per_sec": 0, 00:17:44.678 "rw_mbytes_per_sec": 0, 00:17:44.678 "r_mbytes_per_sec": 0, 00:17:44.678 "w_mbytes_per_sec": 0 00:17:44.678 }, 00:17:44.678 "claimed": false, 00:17:44.678 "zoned": false, 00:17:44.678 "supported_io_types": { 00:17:44.678 "read": true, 00:17:44.678 "write": true, 00:17:44.678 "unmap": true, 00:17:44.678 "write_zeroes": true, 00:17:44.678 "flush": false, 00:17:44.678 "reset": true, 00:17:44.678 "compare": false, 00:17:44.678 "compare_and_write": false, 00:17:44.678 "abort": false, 00:17:44.678 "nvme_admin": false, 00:17:44.678 "nvme_io": false 00:17:44.678 }, 00:17:44.678 "driver_specific": { 00:17:44.678 "lvol": { 00:17:44.678 "lvol_store_uuid": "ddbe845f-d2db-4fc1-8880-653fd6967ea9", 00:17:44.678 "base_bdev": "aio_bdev", 00:17:44.678 "thin_provision": false, 00:17:44.678 
"num_allocated_clusters": 38, 00:17:44.678 "snapshot": false, 00:17:44.678 "clone": false, 00:17:44.678 "esnap_clone": false 00:17:44.678 } 00:17:44.678 } 00:17:44.678 } 00:17:44.678 ] 00:17:44.678 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:44.678 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:44.678 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:44.939 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:44.939 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:44.939 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:44.939 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:44.940 00:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 682b79f8-b107-4431-afd6-dd0a4d5148bd 00:17:45.201 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ddbe845f-d2db-4fc1-8880-653fd6967ea9 00:17:45.201 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:45.462 00:17:45.462 real 0m15.002s 00:17:45.462 user 0m14.548s 00:17:45.462 sys 0m1.264s 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:45.462 ************************************ 00:17:45.462 END TEST lvs_grow_clean 00:17:45.462 ************************************ 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:45.462 ************************************ 00:17:45.462 START TEST lvs_grow_dirty 00:17:45.462 ************************************ 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:45.462 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:45.721 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:45.721 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:45.721 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:17:45.721 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:17:45.721 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:46.055 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:46.055 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:46.055 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea lvol 150 00:17:46.055 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dad5e68c-d1e3-4e3b-bf97-1053490db05c 00:17:46.055 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:46.055 00:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:46.055 [2024-05-15 00:54:33.079851] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:46.055 [2024-05-15 00:54:33.079917] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:46.055 true 00:17:46.346 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:17:46.346 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:46.346 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters 
== 49 )) 00:17:46.346 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:46.346 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dad5e68c-d1e3-4e3b-bf97-1053490db05c 00:17:46.613 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:46.613 [2024-05-15 00:54:33.624251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.613 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3454770 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3454770 /var/tmp/bdevperf.sock 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3454770 ']' 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:46.877 00:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:46.877 [2024-05-15 00:54:33.849994] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
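The dirty-grow case starting here exercises the same core sequence as the clean case above; condensed into plain commands it looks like the sketch below (file path, sizes, and cluster counts are the ones this harness uses):

  SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
  AIO=$SPDK/test/nvmf/target/aio_bdev
  # 200M backing file -> AIO bdev -> lvstore with 4 MiB clusters -> 150M lvol.
  truncate -s 200M "$AIO"
  "$SPDK/scripts/rpc.py" bdev_aio_create "$AIO" aio_bdev 4096
  lvs=$("$SPDK/scripts/rpc.py" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  "$SPDK/scripts/rpc.py" bdev_lvol_create -u "$lvs" lvol 150
  # Grow the backing file, rescan the AIO bdev, then grow the lvstore into the new space.
  truncate -s 400M "$AIO"
  "$SPDK/scripts/rpc.py" bdev_aio_rescan aio_bdev
  "$SPDK/scripts/rpc.py" bdev_lvol_grow_lvstore -u "$lvs"
  # total_data_clusters should move from 49 to 99, which is what the checks in this trace assert.
  "$SPDK/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'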
00:17:46.877 [2024-05-15 00:54:33.850135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454770 ] 00:17:46.877 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.137 [2024-05-15 00:54:33.966055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.137 [2024-05-15 00:54:34.056474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.703 00:54:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:47.703 00:54:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:47.703 00:54:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:47.961 Nvme0n1 00:17:47.961 00:54:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:48.221 [ 00:17:48.221 { 00:17:48.221 "name": "Nvme0n1", 00:17:48.221 "aliases": [ 00:17:48.221 "dad5e68c-d1e3-4e3b-bf97-1053490db05c" 00:17:48.221 ], 00:17:48.221 "product_name": "NVMe disk", 00:17:48.221 "block_size": 4096, 00:17:48.221 "num_blocks": 38912, 00:17:48.221 "uuid": "dad5e68c-d1e3-4e3b-bf97-1053490db05c", 00:17:48.221 "assigned_rate_limits": { 00:17:48.221 "rw_ios_per_sec": 0, 00:17:48.221 "rw_mbytes_per_sec": 0, 00:17:48.221 "r_mbytes_per_sec": 0, 00:17:48.221 "w_mbytes_per_sec": 0 00:17:48.221 }, 00:17:48.221 "claimed": false, 00:17:48.221 "zoned": false, 00:17:48.221 "supported_io_types": { 00:17:48.221 "read": true, 00:17:48.221 "write": true, 00:17:48.221 "unmap": true, 00:17:48.221 "write_zeroes": true, 00:17:48.221 "flush": true, 00:17:48.221 "reset": true, 00:17:48.221 "compare": true, 00:17:48.221 "compare_and_write": true, 00:17:48.221 "abort": true, 00:17:48.221 "nvme_admin": true, 00:17:48.221 "nvme_io": true 00:17:48.221 }, 00:17:48.221 "memory_domains": [ 00:17:48.221 { 00:17:48.221 "dma_device_id": "system", 00:17:48.221 "dma_device_type": 1 00:17:48.221 } 00:17:48.221 ], 00:17:48.221 "driver_specific": { 00:17:48.221 "nvme": [ 00:17:48.221 { 00:17:48.221 "trid": { 00:17:48.221 "trtype": "TCP", 00:17:48.221 "adrfam": "IPv4", 00:17:48.221 "traddr": "10.0.0.2", 00:17:48.221 "trsvcid": "4420", 00:17:48.221 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:48.221 }, 00:17:48.221 "ctrlr_data": { 00:17:48.221 "cntlid": 1, 00:17:48.221 "vendor_id": "0x8086", 00:17:48.221 "model_number": "SPDK bdev Controller", 00:17:48.221 "serial_number": "SPDK0", 00:17:48.221 "firmware_revision": "24.05", 00:17:48.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.221 "oacs": { 00:17:48.221 "security": 0, 00:17:48.221 "format": 0, 00:17:48.221 "firmware": 0, 00:17:48.221 "ns_manage": 0 00:17:48.221 }, 00:17:48.221 "multi_ctrlr": true, 00:17:48.221 "ana_reporting": false 00:17:48.221 }, 00:17:48.221 "vs": { 00:17:48.221 "nvme_version": "1.3" 00:17:48.221 }, 00:17:48.221 "ns_data": { 00:17:48.221 "id": 1, 00:17:48.221 "can_share": true 00:17:48.221 } 00:17:48.221 } 00:17:48.221 ], 00:17:48.221 "mp_policy": "active_passive" 00:17:48.221 } 00:17:48.221 } 00:17:48.221 ] 00:17:48.221 00:54:35 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3455071 00:17:48.221 00:54:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:48.221 00:54:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:48.221 Running I/O for 10 seconds... 00:17:49.156 Latency(us) 00:17:49.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.157 Nvme0n1 : 1.00 23324.00 91.11 0.00 0.00 0.00 0.00 0.00 00:17:49.157 =================================================================================================================== 00:17:49.157 Total : 23324.00 91.11 0.00 0.00 0.00 0.00 0.00 00:17:49.157 00:17:50.089 00:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:17:50.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.347 Nvme0n1 : 2.00 23480.50 91.72 0.00 0.00 0.00 0.00 0.00 00:17:50.347 =================================================================================================================== 00:17:50.347 Total : 23480.50 91.72 0.00 0.00 0.00 0.00 0.00 00:17:50.347 00:17:50.347 true 00:17:50.347 00:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:17:50.347 00:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:50.606 00:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:50.606 00:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:50.606 00:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3455071 00:17:51.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.170 Nvme0n1 : 3.00 23522.67 91.89 0.00 0.00 0.00 0.00 0.00 00:17:51.170 =================================================================================================================== 00:17:51.170 Total : 23522.67 91.89 0.00 0.00 0.00 0.00 0.00 00:17:51.170 00:17:52.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.553 Nvme0n1 : 4.00 23579.75 92.11 0.00 0.00 0.00 0.00 0.00 00:17:52.553 =================================================================================================================== 00:17:52.553 Total : 23579.75 92.11 0.00 0.00 0.00 0.00 0.00 00:17:52.553 00:17:53.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.123 Nvme0n1 : 5.00 23578.00 92.10 0.00 0.00 0.00 0.00 0.00 00:17:53.123 =================================================================================================================== 00:17:53.123 Total : 23578.00 92.10 0.00 0.00 0.00 0.00 0.00 00:17:53.123 00:17:54.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.500 Nvme0n1 : 6.00 23606.50 92.21 0.00 0.00 0.00 0.00 0.00 00:17:54.500 
=================================================================================================================== 00:17:54.500 Total : 23606.50 92.21 0.00 0.00 0.00 0.00 0.00 00:17:54.500 00:17:55.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.434 Nvme0n1 : 7.00 23637.00 92.33 0.00 0.00 0.00 0.00 0.00 00:17:55.434 =================================================================================================================== 00:17:55.434 Total : 23637.00 92.33 0.00 0.00 0.00 0.00 0.00 00:17:55.434 00:17:56.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.372 Nvme0n1 : 8.00 23664.12 92.44 0.00 0.00 0.00 0.00 0.00 00:17:56.372 =================================================================================================================== 00:17:56.372 Total : 23664.12 92.44 0.00 0.00 0.00 0.00 0.00 00:17:56.372 00:17:57.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.304 Nvme0n1 : 9.00 23687.67 92.53 0.00 0.00 0.00 0.00 0.00 00:17:57.304 =================================================================================================================== 00:17:57.304 Total : 23687.67 92.53 0.00 0.00 0.00 0.00 0.00 00:17:57.304 00:17:58.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.240 Nvme0n1 : 10.00 23696.10 92.56 0.00 0.00 0.00 0.00 0.00 00:17:58.240 =================================================================================================================== 00:17:58.240 Total : 23696.10 92.56 0.00 0.00 0.00 0.00 0.00 00:17:58.240 00:17:58.240 00:17:58.240 Latency(us) 00:17:58.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.240 Nvme0n1 : 10.01 23694.71 92.56 0.00 0.00 5398.09 3345.79 13728.07 00:17:58.240 =================================================================================================================== 00:17:58.240 Total : 23694.71 92.56 0.00 0.00 5398.09 3345.79 13728.07 00:17:58.240 0 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3454770 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3454770 ']' 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3454770 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3454770 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3454770' 00:17:58.240 killing process with pid 3454770 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3454770 00:17:58.240 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.240 00:17:58.240 Latency(us) 00:17:58.240 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:58.240 =================================================================================================================== 00:17:58.240 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.240 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3454770 00:17:58.809 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:58.809 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:59.068 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:17:59.068 00:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3451248 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3451248 00:17:59.068 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3451248 Killed "${NVMF_APP[@]}" "$@" 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3457144 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3457144 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3457144 ']' 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:59.068 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:59.326 [2024-05-15 00:54:46.188717] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:17:59.326 [2024-05-15 00:54:46.188834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.326 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.326 [2024-05-15 00:54:46.320139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.583 [2024-05-15 00:54:46.415515] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.583 [2024-05-15 00:54:46.415552] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.583 [2024-05-15 00:54:46.415561] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.583 [2024-05-15 00:54:46.415571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.583 [2024-05-15 00:54:46.415578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.583 [2024-05-15 00:54:46.415610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.841 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:59.841 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:59.841 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.841 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.841 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:00.101 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.101 00:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:00.101 [2024-05-15 00:54:47.030446] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:00.101 [2024-05-15 00:54:47.030567] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:00.101 [2024-05-15 00:54:47.030596] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:00.101 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:00.101 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dad5e68c-d1e3-4e3b-bf97-1053490db05c 00:18:00.101 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=dad5e68c-d1e3-4e3b-bf97-1053490db05c 00:18:00.101 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # 
local bdev_timeout= 00:18:00.101 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:00.101 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:00.101 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:00.101 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:00.361 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dad5e68c-d1e3-4e3b-bf97-1053490db05c -t 2000 00:18:00.361 [ 00:18:00.361 { 00:18:00.361 "name": "dad5e68c-d1e3-4e3b-bf97-1053490db05c", 00:18:00.361 "aliases": [ 00:18:00.361 "lvs/lvol" 00:18:00.361 ], 00:18:00.361 "product_name": "Logical Volume", 00:18:00.361 "block_size": 4096, 00:18:00.361 "num_blocks": 38912, 00:18:00.361 "uuid": "dad5e68c-d1e3-4e3b-bf97-1053490db05c", 00:18:00.361 "assigned_rate_limits": { 00:18:00.361 "rw_ios_per_sec": 0, 00:18:00.361 "rw_mbytes_per_sec": 0, 00:18:00.361 "r_mbytes_per_sec": 0, 00:18:00.361 "w_mbytes_per_sec": 0 00:18:00.361 }, 00:18:00.361 "claimed": false, 00:18:00.361 "zoned": false, 00:18:00.361 "supported_io_types": { 00:18:00.361 "read": true, 00:18:00.361 "write": true, 00:18:00.361 "unmap": true, 00:18:00.361 "write_zeroes": true, 00:18:00.361 "flush": false, 00:18:00.361 "reset": true, 00:18:00.361 "compare": false, 00:18:00.361 "compare_and_write": false, 00:18:00.361 "abort": false, 00:18:00.361 "nvme_admin": false, 00:18:00.361 "nvme_io": false 00:18:00.361 }, 00:18:00.361 "driver_specific": { 00:18:00.361 "lvol": { 00:18:00.361 "lvol_store_uuid": "5cb70e61-cc52-45c7-937d-3e20b9f4aaea", 00:18:00.361 "base_bdev": "aio_bdev", 00:18:00.361 "thin_provision": false, 00:18:00.361 "num_allocated_clusters": 38, 00:18:00.361 "snapshot": false, 00:18:00.361 "clone": false, 00:18:00.361 "esnap_clone": false 00:18:00.361 } 00:18:00.361 } 00:18:00.361 } 00:18:00.361 ] 00:18:00.361 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:00.361 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:18:00.361 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:00.621 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:00.621 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:18:00.621 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:00.621 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:00.621 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:00.882 [2024-05-15 00:54:47.736784] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:00.882 00:54:47 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:18:00.882 request: 00:18:00.882 { 00:18:00.882 "uuid": "5cb70e61-cc52-45c7-937d-3e20b9f4aaea", 00:18:00.882 "method": "bdev_lvol_get_lvstores", 00:18:00.882 "req_id": 1 00:18:00.882 } 00:18:00.882 Got JSON-RPC error response 00:18:00.882 response: 00:18:00.882 { 00:18:00.882 "code": -19, 00:18:00.882 "message": "No such device" 00:18:00.882 } 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:00.882 00:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:01.140 aio_bdev 00:18:01.140 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dad5e68c-d1e3-4e3b-bf97-1053490db05c 00:18:01.140 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=dad5e68c-d1e3-4e3b-bf97-1053490db05c 00:18:01.140 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:01.140 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local 
i 00:18:01.140 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:01.140 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:01.140 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:01.140 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dad5e68c-d1e3-4e3b-bf97-1053490db05c -t 2000 00:18:01.397 [ 00:18:01.397 { 00:18:01.397 "name": "dad5e68c-d1e3-4e3b-bf97-1053490db05c", 00:18:01.397 "aliases": [ 00:18:01.397 "lvs/lvol" 00:18:01.397 ], 00:18:01.398 "product_name": "Logical Volume", 00:18:01.398 "block_size": 4096, 00:18:01.398 "num_blocks": 38912, 00:18:01.398 "uuid": "dad5e68c-d1e3-4e3b-bf97-1053490db05c", 00:18:01.398 "assigned_rate_limits": { 00:18:01.398 "rw_ios_per_sec": 0, 00:18:01.398 "rw_mbytes_per_sec": 0, 00:18:01.398 "r_mbytes_per_sec": 0, 00:18:01.398 "w_mbytes_per_sec": 0 00:18:01.398 }, 00:18:01.398 "claimed": false, 00:18:01.398 "zoned": false, 00:18:01.398 "supported_io_types": { 00:18:01.398 "read": true, 00:18:01.398 "write": true, 00:18:01.398 "unmap": true, 00:18:01.398 "write_zeroes": true, 00:18:01.398 "flush": false, 00:18:01.398 "reset": true, 00:18:01.398 "compare": false, 00:18:01.398 "compare_and_write": false, 00:18:01.398 "abort": false, 00:18:01.398 "nvme_admin": false, 00:18:01.398 "nvme_io": false 00:18:01.398 }, 00:18:01.398 "driver_specific": { 00:18:01.398 "lvol": { 00:18:01.398 "lvol_store_uuid": "5cb70e61-cc52-45c7-937d-3e20b9f4aaea", 00:18:01.398 "base_bdev": "aio_bdev", 00:18:01.398 "thin_provision": false, 00:18:01.398 "num_allocated_clusters": 38, 00:18:01.398 "snapshot": false, 00:18:01.398 "clone": false, 00:18:01.398 "esnap_clone": false 00:18:01.398 } 00:18:01.398 } 00:18:01.398 } 00:18:01.398 ] 00:18:01.398 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:01.398 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:18:01.398 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:01.657 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:01.657 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:18:01.657 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:01.657 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:01.657 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dad5e68c-d1e3-4e3b-bf97-1053490db05c 00:18:01.918 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5cb70e61-cc52-45c7-937d-3e20b9f4aaea 00:18:01.918 00:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.177 00:18:02.177 real 0m16.662s 00:18:02.177 user 0m43.411s 00:18:02.177 sys 0m3.049s 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:02.177 ************************************ 00:18:02.177 END TEST lvs_grow_dirty 00:18:02.177 ************************************ 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:02.177 nvmf_trace.0 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:02.177 rmmod nvme_tcp 00:18:02.177 rmmod nvme_fabrics 00:18:02.177 rmmod nvme_keyring 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3457144 ']' 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3457144 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3457144 ']' 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3457144 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:02.177 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3457144 00:18:02.437 00:54:49 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:02.437 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:02.437 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3457144' 00:18:02.437 killing process with pid 3457144 00:18:02.437 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3457144 00:18:02.437 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3457144 00:18:02.695 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:02.695 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:02.695 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:02.695 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.695 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.695 00:54:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.695 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.695 00:54:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.278 00:54:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:05.278 00:18:05.278 real 0m41.043s 00:18:05.278 user 1m3.211s 00:18:05.278 sys 0m8.878s 00:18:05.278 00:54:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:05.278 00:54:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:05.278 ************************************ 00:18:05.278 END TEST nvmf_lvs_grow 00:18:05.278 ************************************ 00:18:05.278 00:54:51 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:05.278 00:54:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:05.278 00:54:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:05.278 00:54:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:05.278 ************************************ 00:18:05.278 START TEST nvmf_bdev_io_wait 00:18:05.278 ************************************ 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:05.278 * Looking for test storage... 
00:18:05.278 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.278 00:54:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:10.546 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:10.546 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.546 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:10.547 Found net devices under 0000:27:00.0: cvl_0_0 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:10.547 Found net devices under 0000:27:00.1: cvl_0_1 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:10.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:10.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:18:10.547 00:18:10.547 --- 10.0.0.2 ping statistics --- 00:18:10.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.547 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:18:10.547 00:18:10.547 --- 10.0.0.1 ping statistics --- 00:18:10.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.547 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:10.547 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3461787 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3461787 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3461787 ']' 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:10.807 00:54:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:10.807 [2024-05-15 00:54:57.715631] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:18:10.807 [2024-05-15 00:54:57.715768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.807 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.807 [2024-05-15 00:54:57.861612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.065 [2024-05-15 00:54:57.970472] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.065 [2024-05-15 00:54:57.970518] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.065 [2024-05-15 00:54:57.970528] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.065 [2024-05-15 00:54:57.970538] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.065 [2024-05-15 00:54:57.970546] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.065 [2024-05-15 00:54:57.970665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.065 [2024-05-15 00:54:57.970767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.065 [2024-05-15 00:54:57.970782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.065 [2024-05-15 00:54:57.970784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.631 [2024-05-15 00:54:58.554049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.631 00:54:58 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.631 Malloc0 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.631 [2024-05-15 00:54:58.642489] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:11.631 [2024-05-15 00:54:58.642745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3462022 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3462024 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3462025 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3462027 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.631 { 00:18:11.631 "params": { 00:18:11.631 "name": "Nvme$subsystem", 00:18:11.631 "trtype": "$TEST_TRANSPORT", 00:18:11.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.631 "adrfam": 
"ipv4", 00:18:11.631 "trsvcid": "$NVMF_PORT", 00:18:11.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.631 "hdgst": ${hdgst:-false}, 00:18:11.631 "ddgst": ${ddgst:-false} 00:18:11.631 }, 00:18:11.631 "method": "bdev_nvme_attach_controller" 00:18:11.631 } 00:18:11.631 EOF 00:18:11.631 )") 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.631 { 00:18:11.631 "params": { 00:18:11.631 "name": "Nvme$subsystem", 00:18:11.631 "trtype": "$TEST_TRANSPORT", 00:18:11.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.631 "adrfam": "ipv4", 00:18:11.631 "trsvcid": "$NVMF_PORT", 00:18:11.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.631 "hdgst": ${hdgst:-false}, 00:18:11.631 "ddgst": ${ddgst:-false} 00:18:11.631 }, 00:18:11.631 "method": "bdev_nvme_attach_controller" 00:18:11.631 } 00:18:11.631 EOF 00:18:11.631 )") 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.631 { 00:18:11.631 "params": { 00:18:11.631 "name": "Nvme$subsystem", 00:18:11.631 "trtype": "$TEST_TRANSPORT", 00:18:11.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.631 "adrfam": "ipv4", 00:18:11.631 "trsvcid": "$NVMF_PORT", 00:18:11.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.631 "hdgst": ${hdgst:-false}, 00:18:11.631 "ddgst": ${ddgst:-false} 00:18:11.631 }, 00:18:11.631 "method": "bdev_nvme_attach_controller" 00:18:11.631 } 00:18:11.631 EOF 00:18:11.631 )") 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:11.631 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.632 { 00:18:11.632 "params": { 00:18:11.632 "name": "Nvme$subsystem", 00:18:11.632 "trtype": "$TEST_TRANSPORT", 00:18:11.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.632 "adrfam": "ipv4", 00:18:11.632 "trsvcid": "$NVMF_PORT", 00:18:11.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.632 "hdgst": ${hdgst:-false}, 00:18:11.632 "ddgst": ${ddgst:-false} 00:18:11.632 }, 00:18:11.632 "method": "bdev_nvme_attach_controller" 00:18:11.632 } 00:18:11.632 EOF 00:18:11.632 )") 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3462022 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.632 "params": { 00:18:11.632 "name": "Nvme1", 00:18:11.632 "trtype": "tcp", 00:18:11.632 "traddr": "10.0.0.2", 00:18:11.632 "adrfam": "ipv4", 00:18:11.632 "trsvcid": "4420", 00:18:11.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.632 "hdgst": false, 00:18:11.632 "ddgst": false 00:18:11.632 }, 00:18:11.632 "method": "bdev_nvme_attach_controller" 00:18:11.632 }' 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.632 "params": { 00:18:11.632 "name": "Nvme1", 00:18:11.632 "trtype": "tcp", 00:18:11.632 "traddr": "10.0.0.2", 00:18:11.632 "adrfam": "ipv4", 00:18:11.632 "trsvcid": "4420", 00:18:11.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.632 "hdgst": false, 00:18:11.632 "ddgst": false 00:18:11.632 }, 00:18:11.632 "method": "bdev_nvme_attach_controller" 00:18:11.632 }' 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.632 "params": { 00:18:11.632 "name": "Nvme1", 00:18:11.632 "trtype": "tcp", 00:18:11.632 "traddr": "10.0.0.2", 00:18:11.632 "adrfam": "ipv4", 00:18:11.632 "trsvcid": "4420", 00:18:11.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.632 "hdgst": false, 00:18:11.632 "ddgst": false 00:18:11.632 }, 00:18:11.632 "method": "bdev_nvme_attach_controller" 00:18:11.632 }' 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:11.632 00:54:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.632 "params": { 00:18:11.632 "name": "Nvme1", 00:18:11.632 "trtype": "tcp", 00:18:11.632 "traddr": "10.0.0.2", 00:18:11.632 "adrfam": "ipv4", 00:18:11.632 "trsvcid": "4420", 
00:18:11.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.632 "hdgst": false, 00:18:11.632 "ddgst": false 00:18:11.632 }, 00:18:11.632 "method": "bdev_nvme_attach_controller" 00:18:11.632 }' 00:18:11.891 [2024-05-15 00:54:58.697318] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:11.891 [2024-05-15 00:54:58.697400] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:11.891 [2024-05-15 00:54:58.715308] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:11.891 [2024-05-15 00:54:58.715418] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:11.891 [2024-05-15 00:54:58.718937] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:11.891 [2024-05-15 00:54:58.719051] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:11.891 [2024-05-15 00:54:58.720018] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:11.891 [2024-05-15 00:54:58.720128] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:11.891 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.891 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.891 [2024-05-15 00:54:58.867923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.892 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.892 [2024-05-15 00:54:58.930522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.892 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.151 [2024-05-15 00:54:59.002667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.151 [2024-05-15 00:54:59.007902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:12.151 [2024-05-15 00:54:59.049591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.151 [2024-05-15 00:54:59.068484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:12.151 [2024-05-15 00:54:59.143289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:12.151 [2024-05-15 00:54:59.177745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:12.410 Running I/O for 1 seconds... 00:18:12.668 Running I/O for 1 seconds... 00:18:12.668 Running I/O for 1 seconds... 00:18:12.668 Running I/O for 1 seconds... 
00:18:13.602 00:18:13.602 Latency(us) 00:18:13.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.602 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:13.602 Nvme1n1 : 1.02 8434.74 32.95 0.00 0.00 14953.46 6036.21 32285.10 00:18:13.602 =================================================================================================================== 00:18:13.602 Total : 8434.74 32.95 0.00 0.00 14953.46 6036.21 32285.10 00:18:13.602 00:18:13.602 Latency(us) 00:18:13.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.602 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:13.602 Nvme1n1 : 1.01 6911.64 27.00 0.00 0.00 18420.30 10209.82 29663.66 00:18:13.602 =================================================================================================================== 00:18:13.602 Total : 6911.64 27.00 0.00 0.00 18420.30 10209.82 29663.66 00:18:13.602 00:18:13.602 Latency(us) 00:18:13.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.602 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:13.602 Nvme1n1 : 1.00 132682.72 518.29 0.00 0.00 960.81 226.36 1819.49 00:18:13.602 =================================================================================================================== 00:18:13.602 Total : 132682.72 518.29 0.00 0.00 960.81 226.36 1819.49 00:18:13.602 00:18:13.602 Latency(us) 00:18:13.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.602 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:13.602 Nvme1n1 : 1.00 8031.36 31.37 0.00 0.00 15902.06 3156.08 45254.33 00:18:13.602 =================================================================================================================== 00:18:13.602 Total : 8031.36 31.37 0.00 0.00 15902.06 3156.08 45254.33 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3462024 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3462025 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3462027 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.174 rmmod nvme_tcp 00:18:14.174 rmmod nvme_fabrics 00:18:14.174 rmmod nvme_keyring 00:18:14.174 00:55:01 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3461787 ']' 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3461787 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3461787 ']' 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3461787 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:14.174 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3461787 00:18:14.433 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:14.433 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:14.433 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3461787' 00:18:14.433 killing process with pid 3461787 00:18:14.433 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3461787 00:18:14.433 [2024-05-15 00:55:01.251364] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:14.433 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3461787 00:18:14.691 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.691 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.691 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.691 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.691 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.691 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.691 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.691 00:55:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.229 00:55:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:17.229 00:18:17.229 real 0m11.903s 00:18:17.229 user 0m23.631s 00:18:17.229 sys 0m5.899s 00:18:17.229 00:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:17.229 00:55:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.229 ************************************ 00:18:17.229 END TEST nvmf_bdev_io_wait 00:18:17.229 ************************************ 00:18:17.229 00:55:03 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:17.229 00:55:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:17.229 00:55:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:17.229 00:55:03 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.229 ************************************ 00:18:17.229 START TEST nvmf_queue_depth 00:18:17.229 ************************************ 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:17.229 * Looking for test storage... 00:18:17.229 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:17.229 00:55:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:17.230 00:55:03 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:17.230 00:55:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:22.500 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:22.500 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:22.500 Found net devices under 0000:27:00.0: cvl_0_0 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:22.500 00:55:08 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:22.500 Found net devices under 0000:27:00.1: cvl_0_1 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.500 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.501 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:22.501 00:55:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:22.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:18:22.501 00:18:22.501 --- 10.0.0.2 ping statistics --- 00:18:22.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.501 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:18:22.501 00:18:22.501 --- 10.0.0.1 ping statistics --- 00:18:22.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.501 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3466517 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3466517 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3466517 ']' 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:22.501 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.501 [2024-05-15 00:55:09.179157] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
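[editor's note] The namespace plumbing that nvmf_tcp_init performs before this target starts reduces to a handful of iproute2/iptables commands, condensed here from the trace above (interface names and addresses are the ones detected on this machine):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator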
00:18:22.501 [2024-05-15 00:55:09.179272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.501 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.501 [2024-05-15 00:55:09.333687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.501 [2024-05-15 00:55:09.501894] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.501 [2024-05-15 00:55:09.501965] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.501 [2024-05-15 00:55:09.501982] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.501 [2024-05-15 00:55:09.501997] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.501 [2024-05-15 00:55:09.502010] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.501 [2024-05-15 00:55:09.502066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 [2024-05-15 00:55:09.931386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.068 00:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 Malloc0 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.068 00:55:10 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 [2024-05-15 00:55:10.024903] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:23.068 [2024-05-15 00:55:10.025272] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3466680 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3466680 /var/tmp/bdevperf.sock 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3466680 ']' 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:23.068 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 [2024-05-15 00:55:10.097180] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
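[editor's note] Unlike the JSON-driven bdevperf instances in the previous test, queue_depth drives a single bdevperf over its own RPC socket: the app is started with -z (wait for RPC), the exported namespace is attached through that socket, and the run is triggered with bdevperf.py. Condensed from the trace around this point; rpc_cmd is treated here as equivalent to plain scripts/rpc.py calls, and the full build paths are shortened to the SPDK tree root.

  # provision the target (RPCs over the default /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # start bdevperf paused on a private RPC socket: queue depth 1024, 10 s verify run
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # attach the NVMe-oF controller through bdevperf's socket, then kick off the workload
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests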
00:18:23.068 [2024-05-15 00:55:10.097289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466680 ] 00:18:23.326 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.326 [2024-05-15 00:55:10.211680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.326 [2024-05-15 00:55:10.303462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.896 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:23.896 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:23.896 00:55:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:23.896 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.896 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:23.896 NVMe0n1 00:18:23.896 00:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.896 00:55:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:24.156 Running I/O for 10 seconds... 00:18:34.131 00:18:34.131 Latency(us) 00:18:34.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.131 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:34.131 Verification LBA range: start 0x0 length 0x4000 00:18:34.131 NVMe0n1 : 10.05 12445.81 48.62 0.00 0.00 81995.85 15383.71 52152.86 00:18:34.131 =================================================================================================================== 00:18:34.131 Total : 12445.81 48.62 0.00 0.00 81995.85 15383.71 52152.86 00:18:34.131 0 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3466680 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3466680 ']' 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3466680 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3466680 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3466680' 00:18:34.131 killing process with pid 3466680 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3466680 00:18:34.131 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.131 00:18:34.131 Latency(us) 00:18:34.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.131 =================================================================================================================== 00:18:34.131 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.131 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3466680 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.726 rmmod nvme_tcp 00:18:34.726 rmmod nvme_fabrics 00:18:34.726 rmmod nvme_keyring 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3466517 ']' 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3466517 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3466517 ']' 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3466517 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3466517 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3466517' 00:18:34.726 killing process with pid 3466517 00:18:34.726 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3466517 00:18:34.726 [2024-05-15 00:55:21.606094] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:34.727 00:55:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3466517 00:18:35.294 00:55:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.294 00:55:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.294 00:55:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.294 00:55:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.294 00:55:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.294 00:55:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.294 00:55:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.294 00:55:22 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.198 00:55:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.198 00:18:37.198 real 0m20.370s 00:18:37.198 user 0m25.209s 00:18:37.198 sys 0m5.216s 00:18:37.198 00:55:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:37.198 00:55:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:37.198 ************************************ 00:18:37.198 END TEST nvmf_queue_depth 00:18:37.198 ************************************ 00:18:37.198 00:55:24 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:37.198 00:55:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:37.198 00:55:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:37.198 00:55:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:37.198 ************************************ 00:18:37.198 START TEST nvmf_target_multipath 00:18:37.198 ************************************ 00:18:37.198 00:55:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:37.458 * Looking for test storage... 00:18:37.458 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.458 00:55:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:42.815 00:55:29 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:42.815 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.815 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:42.816 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.816 00:55:29 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:42.816 Found net devices under 0000:27:00.0: cvl_0_0 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:42.816 Found net devices under 0000:27:00.1: cvl_0_1 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.816 00:55:29 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:42.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:18:42.816 00:18:42.816 --- 10.0.0.2 ping statistics --- 00:18:42.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.816 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:42.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:18:42.816 00:18:42.816 --- 10.0.0.1 ping statistics --- 00:18:42.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.816 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:42.816 only one NIC for nvmf test 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.816 rmmod nvme_tcp 00:18:42.816 rmmod nvme_fabrics 00:18:42.816 rmmod nvme_keyring 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.816 00:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.349 00:55:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.350 00:55:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.350 00:55:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.350 00:18:45.350 real 0m7.700s 00:18:45.350 user 0m1.644s 00:18:45.350 sys 0m3.946s 00:18:45.350 00:55:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.350 00:55:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:45.350 ************************************ 00:18:45.350 END TEST nvmf_target_multipath 00:18:45.350 ************************************ 00:18:45.350 00:55:31 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:45.350 00:55:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:45.350 00:55:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:45.350 00:55:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.350 ************************************ 00:18:45.350 START TEST nvmf_zcopy 00:18:45.350 ************************************ 00:18:45.350 00:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:45.350 * Looking for test storage... 
00:18:45.350 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
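The nvmftestinit sequence traced in the multipath run above (and about to be repeated for this zcopy run) splits the two ice ports across a network namespace so the target and initiator traffic crosses the physical link instead of being short-circuited through the local stack: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side). Reduced to its effect, with the interface names, namespace name and addresses taken from the trace, the setup is roughly the following sketch rather than the common.sh code itself:

    # target port goes into a private namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

The real nvmf_tcp_init additionally flushes stale addresses first and records NVMF_TARGET_NS_CMD, which is why every target-side command later in the log, including nvmf_tgt itself, runs under ip netns exec cvl_0_0_ns_spdk.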
00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.350 00:55:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:50.624 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.624 00:55:37 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:50.624 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:50.624 Found net devices under 0000:27:00.0: cvl_0_0 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:50.624 Found net devices under 0000:27:00.1: cvl_0_1 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:50.624 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.625 00:55:37 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:50.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:18:50.625 00:18:50.625 --- 10.0.0.2 ping statistics --- 00:18:50.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.625 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:50.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:18:50.625 00:18:50.625 --- 10.0.0.1 ping statistics --- 00:18:50.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.625 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3476653 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3476653 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3476653 ']' 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:50.625 00:55:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:50.625 [2024-05-15 00:55:37.496401] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:50.625 [2024-05-15 00:55:37.496505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.625 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.625 [2024-05-15 00:55:37.637767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.884 [2024-05-15 00:55:37.794689] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.884 [2024-05-15 00:55:37.794745] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:50.884 [2024-05-15 00:55:37.794760] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.884 [2024-05-15 00:55:37.794777] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.884 [2024-05-15 00:55:37.794790] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.884 [2024-05-15 00:55:37.794832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.455 [2024-05-15 00:55:38.256583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.455 [2024-05-15 00:55:38.272481] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:51.455 [2024-05-15 00:55:38.272982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.455 malloc0 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:51.455 { 00:18:51.455 "params": { 00:18:51.455 "name": "Nvme$subsystem", 00:18:51.455 "trtype": "$TEST_TRANSPORT", 00:18:51.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:51.455 "adrfam": "ipv4", 00:18:51.455 "trsvcid": "$NVMF_PORT", 00:18:51.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:51.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:51.455 "hdgst": ${hdgst:-false}, 00:18:51.455 "ddgst": ${ddgst:-false} 00:18:51.455 }, 00:18:51.455 "method": "bdev_nvme_attach_controller" 00:18:51.455 } 00:18:51.455 EOF 00:18:51.455 )") 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:51.455 00:55:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:51.455 "params": { 00:18:51.455 "name": "Nvme1", 00:18:51.455 "trtype": "tcp", 00:18:51.455 "traddr": "10.0.0.2", 00:18:51.455 "adrfam": "ipv4", 00:18:51.455 "trsvcid": "4420", 00:18:51.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.455 "hdgst": false, 00:18:51.455 "ddgst": false 00:18:51.455 }, 00:18:51.455 "method": "bdev_nvme_attach_controller" 00:18:51.455 }' 00:18:51.455 [2024-05-15 00:55:38.424601] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:51.455 [2024-05-15 00:55:38.424733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476917 ] 00:18:51.455 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.717 [2024-05-15 00:55:38.552953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.717 [2024-05-15 00:55:38.657487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.283 Running I/O for 10 seconds... 
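Before kicking off the 10-second verify run above, target/zcopy.sh configured the target entirely through rpc_cmd (a wrapper around scripts/rpc.py talking to the app's RPC socket). Stripped of the xtrace framing, the traced sequence amounts to roughly the lines below, with every argument copied from the log; treat it as a readable recap, not the script itself:

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MiB malloc bdev, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The decode_rpc_listen_address deprecation warning in the target log is emitted when this listener is added and is a warning only; the listener still comes up on 10.0.0.2 port 4420 as the subsequent NOTICE confirms.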
00:19:02.261 00:19:02.261 Latency(us) 00:19:02.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.261 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:02.261 Verification LBA range: start 0x0 length 0x1000 00:19:02.261 Nvme1n1 : 10.01 8001.01 62.51 0.00 0.00 15957.55 2638.69 36148.28 00:19:02.261 =================================================================================================================== 00:19:02.261 Total : 8001.01 62.51 0.00 0.00 15957.55 2638.69 36148.28 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3479030 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:02.522 [2024-05-15 00:55:49.468071] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.468121] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:02.522 { 00:19:02.522 "params": { 00:19:02.522 "name": "Nvme$subsystem", 00:19:02.522 "trtype": "$TEST_TRANSPORT", 00:19:02.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:02.522 "adrfam": "ipv4", 00:19:02.522 "trsvcid": "$NVMF_PORT", 00:19:02.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:02.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:02.522 "hdgst": ${hdgst:-false}, 00:19:02.522 "ddgst": ${ddgst:-false} 00:19:02.522 }, 00:19:02.522 "method": "bdev_nvme_attach_controller" 00:19:02.522 } 00:19:02.522 EOF 00:19:02.522 )") 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
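Neither bdevperf run in this test reads a config file from disk: gen_nvmf_target_json prints the bdev_nvme_attach_controller JSON shown expanded in the trace, and the harness hands it to bdevperf through process substitution, which is why the --json argument appears as /dev/fd/62 for the verify run and /dev/fd/63 for the randrw run. Put back together (path, flags and the generator name taken from the trace; a sketch of the pattern, not the literal script lines):

    # 10 s verify pass, then a 5 s 50/50 random read/write pass, both at queue depth 128 with 8 KiB I/O
    bdevperf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
    "$bdevperf" --json <(gen_nvmf_target_json) -t 5  -q 128 -w randrw -M 50 -o 8192

The generated JSON attaches controller Nvme1 over NVMe/TCP at 10.0.0.2:4420 with host NQN nqn.2016-06.io.spdk:host1, so the malloc0 namespace added above shows up inside bdevperf as the Nvme1n1 job reported in the results table.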
00:19:02.522 [2024-05-15 00:55:49.475963] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.475984] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:02.522 00:55:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:02.522 "params": { 00:19:02.522 "name": "Nvme1", 00:19:02.522 "trtype": "tcp", 00:19:02.522 "traddr": "10.0.0.2", 00:19:02.522 "adrfam": "ipv4", 00:19:02.522 "trsvcid": "4420", 00:19:02.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.522 "hdgst": false, 00:19:02.522 "ddgst": false 00:19:02.522 }, 00:19:02.522 "method": "bdev_nvme_attach_controller" 00:19:02.522 }' 00:19:02.522 [2024-05-15 00:55:49.483964] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.483983] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.491959] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.491975] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.499949] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.499965] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.507957] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.507972] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.515959] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.515973] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.523952] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.523967] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.531965] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.531981] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.537829] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:19:02.522 [2024-05-15 00:55:49.537942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479030 ] 00:19:02.522 [2024-05-15 00:55:49.539954] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.539968] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.547972] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.547987] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.555964] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.555978] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.563971] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.563986] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.571982] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.571997] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.522 [2024-05-15 00:55:49.579973] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.522 [2024-05-15 00:55:49.579988] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.587968] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.587983] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.595981] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.595996] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.603972] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.603986] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.783 [2024-05-15 00:55:49.611992] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.612007] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.619986] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.620000] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.627977] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.627991] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.635989] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.636003] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.643995] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.644009] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.650996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.783 [2024-05-15 00:55:49.651991] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.652005] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.660000] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.660014] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.668021] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.668035] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.676007] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.676022] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.684006] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.684020] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.692000] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.692016] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.700013] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.700027] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.708020] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.708035] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.716005] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.716019] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.724018] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.724033] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.732010] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.732024] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.740022] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.740036] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.742562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.783 [2024-05-15 00:55:49.748026] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.748040] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:19:02.783 [2024-05-15 00:55:49.756019] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.756032] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.764028] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.764043] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.772078] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.772092] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.780021] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.780036] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.788035] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.788055] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.796026] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.796041] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.804051] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.804066] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.812038] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.812057] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.820040] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.820058] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.828056] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.828070] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.836052] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.783 [2024-05-15 00:55:49.836066] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.783 [2024-05-15 00:55:49.844051] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:02.784 [2024-05-15 00:55:49.844064] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.852055] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.852070] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.860054] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.860068] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.868062] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:19:03.043 [2024-05-15 00:55:49.868079] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.876068] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.876082] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.884055] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.884069] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.892084] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.892108] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.900092] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.900111] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.908070] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.908089] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.916098] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.916122] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.924084] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.924104] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.932097] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.932119] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.940091] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.940110] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.948148] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.948173] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.956138] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.956164] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 Running I/O for 5 seconds... 
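Note: the repeating message pair above (spdk_nvmf_subsystem_add_ns_ext followed by nvmf_rpc_ns_paused) is the target rejecting a namespace-add for an NSID already attached to the subsystem. A minimal sketch of the kind of RPC sequence that provokes it is shown below; the bdev name Malloc0, the nested parameter layout, and the socket path are assumptions, not taken from this log.

    # Hedged sketch: adding the same explicit NSID twice makes the target log
    # "Requested NSID 1 already in use" and fail the second RPC.
    # Assumptions: bdev name "Malloc0", params layout, socket /var/tmp/spdk.sock.
    import json
    import socket

    def rpc(sock, rpc_id, method, params):
        # Send one JSON-RPC 2.0 request and return the raw response text.
        sock.sendall(json.dumps({"jsonrpc": "2.0", "id": rpc_id,
                                 "method": method, "params": params}).encode())
        return sock.recv(65536).decode()

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")
        params = {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "Malloc0", "nsid": 1},
        }
        print(rpc(sock, 1, "nvmf_subsystem_add_ns", params))  # first add succeeds
        print(rpc(sock, 2, "nvmf_subsystem_add_ns", params))  # second add: NSID 1 already in use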
00:19:03.043 [2024-05-15 00:55:49.964128] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.964145] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.976705] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.976734] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.987836] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.987865] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:49.996844] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:49.996871] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.004434] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.004465] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.016372] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.016402] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.026943] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.026973] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.037183] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.037217] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.046361] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.046390] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.055545] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.055572] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.065052] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.065081] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.072595] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.072621] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.082881] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.082909] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.092092] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 [2024-05-15 00:55:50.092117] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.043 [2024-05-15 00:55:50.101493] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.043 
[2024-05-15 00:55:50.101519] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.110569] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.110598] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.119694] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.119721] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.129197] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.129224] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.138386] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.138413] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.148024] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.148056] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.157116] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.157142] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.166677] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.166702] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.175751] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.175777] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.185396] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.185421] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.193942] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.301 [2024-05-15 00:55:50.193987] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.301 [2024-05-15 00:55:50.201191] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.201223] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.212300] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.212330] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.220898] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.220923] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.230158] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.230183] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.239112] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.239140] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.248858] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.248885] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.257460] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.257488] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.266389] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.266416] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.275867] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.275893] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.284335] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.284361] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.293999] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.294026] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.303216] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.303242] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.312298] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.312324] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.321277] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.321305] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.330318] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.330345] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.339810] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.339837] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.348966] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.348994] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.302 [2024-05-15 00:55:50.358490] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.302 [2024-05-15 00:55:50.358515] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.367697] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.367725] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.376935] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.376961] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.386338] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.386363] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.395455] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.395481] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.403993] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.404018] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.413701] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.413727] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.422929] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.422955] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.432040] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.432073] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.441609] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.441635] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.450753] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.450778] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.460320] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.460345] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.469524] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.469549] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.478645] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.478672] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.488136] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.488162] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.497411] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.497440] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.506467] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.506493] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.516038] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.516074] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.560 [2024-05-15 00:55:50.525156] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.560 [2024-05-15 00:55:50.525184] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.534732] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.534760] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.544101] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.544125] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.553282] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.553310] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.562214] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.562240] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.571942] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.571969] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.581173] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.581200] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.590684] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.590710] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.599692] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.599718] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.608743] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.608769] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.561 [2024-05-15 00:55:50.617597] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.561 [2024-05-15 00:55:50.617623] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.820 [2024-05-15 00:55:50.627096] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.627126] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.636254] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.636280] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.645782] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.645810] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.655174] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.655199] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.663798] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.663824] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.673274] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.673300] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.682369] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.682395] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.692050] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.692076] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.701282] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.701308] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.710878] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.710903] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.720084] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.720110] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.729898] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.729924] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.739148] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.739175] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.748028] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.748058] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.757550] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.757576] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.766095] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.766123] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.775167] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.775192] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.784408] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.784440] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.793979] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.794004] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.803166] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.803191] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.812189] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.812216] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.821553] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.821578] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.830944] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.830971] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.839490] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.839516] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.848450] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.848476] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.858156] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.858181] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.867339] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.867363] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:03.821 [2024-05-15 00:55:50.876886] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:03.821 [2024-05-15 00:55:50.876911] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.885989] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.081 [2024-05-15 00:55:50.886014] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.895585] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.081 [2024-05-15 00:55:50.895612] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.904081] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.081 [2024-05-15 00:55:50.904109] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.913263] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.081 [2024-05-15 00:55:50.913288] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.922376] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.081 [2024-05-15 00:55:50.922400] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.931833] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.081 [2024-05-15 00:55:50.931860] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.941105] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.081 [2024-05-15 00:55:50.941131] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.950283] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.081 [2024-05-15 00:55:50.950308] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.959344] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.081 [2024-05-15 00:55:50.959369] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.081 [2024-05-15 00:55:50.968496] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:50.968524] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:50.977575] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:50.977601] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:50.987178] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:50.987202] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:50.995870] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:50.995895] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.004787] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.004812] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.014217] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.014244] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.023819] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.023844] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.032909] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.032936] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.042224] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.042251] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.051626] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.051651] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.060820] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.060846] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.069913] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.069937] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.078970] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.079001] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.088610] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.088635] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.098061] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.098089] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.107555] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.107582] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.116213] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.116241] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.125350] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.125377] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.082 [2024-05-15 00:55:51.134831] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.082 [2024-05-15 00:55:51.134857] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.144038] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.144071] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.153187] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.153213] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.162254] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.162284] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.171686] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.171713] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.180939] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.180968] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.190790] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.190816] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.199905] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.199930] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.209263] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.209289] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.218497] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.218522] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.228354] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.228379] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.236917] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.236943] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.246055] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.246081] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.255353] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.255385] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.264296] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.341 [2024-05-15 00:55:51.264322] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.341 [2024-05-15 00:55:51.273320] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.273347] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.282295] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.282321] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.291161] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.291186] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.300344] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.300370] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.309531] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.309556] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.319010] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.319035] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.328689] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.328714] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.337882] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.337908] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.347433] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.347459] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.356515] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.356542] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.366096] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.366122] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.375829] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.375859] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.384955] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.384982] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.393981] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.394006] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.342 [2024-05-15 00:55:51.403300] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.342 [2024-05-15 00:55:51.403327] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.600 [2024-05-15 00:55:51.413014] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.600 [2024-05-15 00:55:51.413041] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.600 [2024-05-15 00:55:51.422208] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.600 [2024-05-15 00:55:51.422235] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.600 [2024-05-15 00:55:51.431786] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.600 [2024-05-15 00:55:51.431817] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.600 [2024-05-15 00:55:51.441399] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.600 [2024-05-15 00:55:51.441427] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.600 [2024-05-15 00:55:51.450664] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.600 [2024-05-15 00:55:51.450690] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.459932] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.459959] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.469372] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.469398] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.478437] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.478462] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.487426] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.487453] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.496851] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.496878] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.505924] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.505949] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.515365] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.515391] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.524970] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.524996] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.533888] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.533912] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.542856] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.542883] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.552720] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.552745] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.562018] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.562048] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.571588] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.601 [2024-05-15 00:55:51.571613] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.601 [2024-05-15 00:55:51.581399] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:04.601 [2024-05-15 00:55:51.581425] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair — subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats roughly every 9-10 ms, several hundred times in all, from wall clock 00:55:51.581 through 00:55:54.409 (test clock 00:19:04.601 through 00:19:07.467); only the first and last occurrences are kept here ...]
00:19:07.467 [2024-05-15 00:55:54.409513] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:07.467 [2024-05-15 00:55:54.409538]
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.418630] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.418661] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.428077] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.428102] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.437145] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.437172] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.446166] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.446192] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.455655] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.455684] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.465227] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.465254] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.474431] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.474457] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.483411] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.483438] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.492371] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.492397] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.501980] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.502008] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.511023] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.511052] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.467 [2024-05-15 00:55:54.520504] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.467 [2024-05-15 00:55:54.520531] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.529672] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.529698] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.539394] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.539422] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.548942] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.548967] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.558109] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.558135] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.567527] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.567552] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.577231] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.577260] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.586246] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.586272] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.595890] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.595919] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.604957] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.604983] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.614697] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.614726] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.624301] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.624327] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.632854] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.632883] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.641937] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.641964] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.725 [2024-05-15 00:55:54.651387] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.725 [2024-05-15 00:55:54.651417] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.659398] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.659427] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.669464] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.669492] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.678155] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.678182] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.687254] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.687280] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.696757] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.696785] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.706519] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.706544] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.716083] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.716110] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.724584] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.724609] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.734306] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.734332] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.743244] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.743268] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.752369] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.752395] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.761230] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.761255] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.769736] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.769762] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.726 [2024-05-15 00:55:54.778752] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.726 [2024-05-15 00:55:54.778777] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.788237] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.788269] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.797423] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.797450] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.806381] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.806407] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.815347] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.815374] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.824419] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.824445] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.833683] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.833710] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.842617] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.842643] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.852329] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.852357] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.861453] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.861480] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.870668] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.870695] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.880232] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.880258] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.889773] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.889799] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.898873] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.898899] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.908546] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.908572] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.917793] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.917820] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.926752] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.926779] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.935657] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.935683] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.945140] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.945171] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.954348] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.954375] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.963796] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.963822] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.970072] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.970095] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 00:19:07.984 Latency(us) 00:19:07.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.984 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:07.984 Nvme1n1 : 5.01 17261.23 134.85 0.00 0.00 7408.81 3294.05 18212.11 00:19:07.984 =================================================================================================================== 00:19:07.984 Total : 17261.23 134.85 0.00 0.00 7408.81 3294.05 18212.11 00:19:07.984 [2024-05-15 00:55:54.978095] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.978119] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.986081] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.986103] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:54.994059] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:54.994075] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:55.002082] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:55.002097] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:55.010080] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:55.010097] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:55.018063] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:55.018078] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:55.026077] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:55.026093] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:55.034067] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:55.034082] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.984 [2024-05-15 00:55:55.042078] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.984 [2024-05-15 00:55:55.042093] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.050079] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.050095] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.058070] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.058085] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.066082] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.066099] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.074086] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.074101] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.082077] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.082091] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.090090] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.090105] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.098080] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.098095] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.106095] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.106111] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.114088] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.114103] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.122091] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.122106] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.130100] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.130115] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.138092] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.138107] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.146092] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.146106] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.154097] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.154111] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.162094] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.162107] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.170102] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.170115] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.178114] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.178131] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.186101] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.186116] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.194111] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.194125] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.202117] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.202132] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.210106] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.210120] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.218119] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.218137] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.226111] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.226126] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.234120] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.234135] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.242117] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.242130] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.250115] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.250129] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.242 [2024-05-15 00:55:55.258125] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.242 [2024-05-15 00:55:55.258138] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.243 [2024-05-15 00:55:55.266124] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.243 [2024-05-15 00:55:55.266138] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.243 [2024-05-15 00:55:55.274124] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.243 [2024-05-15 00:55:55.274140] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.243 [2024-05-15 00:55:55.282132] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.243 [2024-05-15 00:55:55.282146] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.243 [2024-05-15 00:55:55.290253] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.243 [2024-05-15 00:55:55.290267] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.243 [2024-05-15 00:55:55.298143] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.243 [2024-05-15 00:55:55.298159] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.502 [2024-05-15 00:55:55.306136] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.502 [2024-05-15 00:55:55.306151] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.502 [2024-05-15 00:55:55.314129] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.502 [2024-05-15 00:55:55.314143] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.502 [2024-05-15 00:55:55.322142] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.502 [2024-05-15 00:55:55.322156] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.502 [2024-05-15 00:55:55.330145] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.502 [2024-05-15 00:55:55.330160] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.502 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3479030) - No such process 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3479030 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:08.502 delay0 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.502 00:55:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:08.502 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.502 [2024-05-15 00:55:55.485900] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping 
unsupported current discovery service or discovery service referral 00:19:15.110 Initializing NVMe Controllers 00:19:15.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:15.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:15.110 Initialization complete. Launching workers. 00:19:15.110 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1307 00:19:15.110 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1581, failed to submit 46 00:19:15.110 success 1405, unsuccess 176, failed 0 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:15.110 rmmod nvme_tcp 00:19:15.110 rmmod nvme_fabrics 00:19:15.110 rmmod nvme_keyring 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3476653 ']' 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3476653 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3476653 ']' 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3476653 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3476653 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3476653' 00:19:15.110 killing process with pid 3476653 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3476653 00:19:15.110 [2024-05-15 00:56:01.941161] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:15.110 00:56:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3476653 00:19:15.369 00:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:15.369 00:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:15.369 00:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:15.369 00:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.369 00:56:02 nvmf_tcp.nvmf_zcopy -- 
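The cleanup above ends the zcopy run; the step that produced the abort statistics is the sequence traced just before it (zcopy.sh lines 52-56): detach NSID 1, wrap malloc0 in a delay bdev with roughly one-second latencies, re-export it as NSID 1, and point the abort example at it so aborts are submitted while I/O is deliberately stalled. A standalone sketch of that sequence, assuming a target already configured as earlier in this log, looks roughly like this:

```bash
#!/usr/bin/env bash
# Sketch of the delay0/abort step traced above (zcopy.sh lines 52-56).
# Paths, NQN and flags are copied from the trace; the running target,
# subsystem nqn.2016-06.io.spdk:cnode1 and bdev malloc0 are assumed to exist.
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_subsystem_remove_ns "$NQN" 1                   # detach NSID 1 (malloc0)
"$RPC" bdev_delay_create -b malloc0 -d delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s added latency per I/O (values in microseconds)
"$RPC" nvmf_subsystem_add_ns "$NQN" delay0 -n 1            # re-export it as NSID 1

# Queue deep random I/O against the slow namespace and submit aborts for it.
"$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1"
```

With each I/O held for about a second, most of the 64 queued commands are still outstanding when the aborts arrive, which is how the run above reaches 1581 abort submissions with 1405 of them succeeding.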
nvmf/common.sh@278 -- # remove_spdk_ns 00:19:15.369 00:56:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.369 00:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.369 00:56:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.905 00:56:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:17.905 00:19:17.905 real 0m32.481s 00:19:17.905 user 0m45.805s 00:19:17.905 sys 0m8.101s 00:19:17.905 00:56:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:17.905 00:56:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:17.905 ************************************ 00:19:17.905 END TEST nvmf_zcopy 00:19:17.905 ************************************ 00:19:17.905 00:56:04 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:17.905 00:56:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:17.905 00:56:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:17.905 00:56:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:17.905 ************************************ 00:19:17.905 START TEST nvmf_nmic 00:19:17.905 ************************************ 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:17.905 * Looking for test storage... 00:19:17.905 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.905 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:19:17.906 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:17.906 00:56:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:17.906 00:56:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:23.178 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:23.178 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:23.178 Found net devices under 0000:27:00.0: cvl_0_0 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.178 00:56:09 
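The scan above is essentially: walk the PCI bus, keep functions whose vendor/device IDs are on the supported list (E810, 0x8086:0x159b in this run), then resolve each function to its kernel net device through /sys/bus/pci/devices/<addr>/net, exactly as the pci_net_devs=(".../net/"*) expansion does. A minimal standalone sketch of that lookup (E810 only, no RDMA handling) could be:

```bash
#!/usr/bin/env bash
# Sketch only: find Intel E810 (8086:159b) PCI functions and the netdevs behind
# them, mirroring the sysfs resolution performed by gather_supported_nvmf_pci_devs.
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    echo "Found $(basename "$pci") ($(<"$pci/vendor") - $(<"$pci/device"))"
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] && net_devs+=("$(basename "$netdir")")   # e.g. cvl_0_0
    done
done
echo "net devices: ${net_devs[*]}"
```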
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:23.178 Found net devices under 0000:27:00.1: cvl_0_1 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:23.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:19:23.178 00:19:23.178 --- 10.0.0.2 ping statistics --- 00:19:23.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.178 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:19:23.178 00:19:23.178 --- 10.0.0.1 ping statistics --- 00:19:23.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.178 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3485268 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3485268 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3485268 ']' 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.178 00:56:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:23.178 [2024-05-15 00:56:09.926171] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
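The nvmf_tcp_init sequence above splits the two detected ports: cvl_0_0 is moved into a private network namespace for the target and given 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened, and connectivity is checked with a ping in each direction before the target application is started inside the namespace. Condensed from the commands traced above:

```bash
# Condensed from the nvmf_tcp_init trace above; interface names are the ones
# detected for this run (cvl_0_0 = target side, cvl_0_1 = initiator side).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator
```

Because NVMF_APP is prefixed with the namespace command, the nvmf_tgt launched a few lines above runs inside cvl_0_0_ns_spdk and is reachable from the initiator side only via 10.0.0.2:4420 over cvl_0_1.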
00:19:23.178 [2024-05-15 00:56:09.926278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.178 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.178 [2024-05-15 00:56:10.055059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.179 [2024-05-15 00:56:10.162539] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.179 [2024-05-15 00:56:10.162584] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.179 [2024-05-15 00:56:10.162593] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.179 [2024-05-15 00:56:10.162602] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.179 [2024-05-15 00:56:10.162610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.179 [2024-05-15 00:56:10.162679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.179 [2024-05-15 00:56:10.162792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.179 [2024-05-15 00:56:10.162904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.179 [2024-05-15 00:56:10.162913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.746 [2024-05-15 00:56:10.646632] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.746 Malloc0 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.746 [2024-05-15 00:56:10.706454] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:23.746 [2024-05-15 00:56:10.706712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:23.746 test case1: single bdev can't be used in multiple subsystems 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.746 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.747 [2024-05-15 00:56:10.730521] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:23.747 [2024-05-15 00:56:10.730549] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:23.747 [2024-05-15 00:56:10.730560] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:23.747 request: 00:19:23.747 { 00:19:23.747 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:23.747 "namespace": { 00:19:23.747 "bdev_name": "Malloc0", 00:19:23.747 "no_auto_visible": false 00:19:23.747 }, 00:19:23.747 "method": "nvmf_subsystem_add_ns", 00:19:23.747 "req_id": 1 00:19:23.747 } 00:19:23.747 Got JSON-RPC error response 00:19:23.747 response: 00:19:23.747 { 00:19:23.747 "code": -32602, 00:19:23.747 "message": "Invalid parameters" 00:19:23.747 } 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:23.747 00:56:10 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:23.747 Adding namespace failed - expected result. 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:23.747 test case2: host connect to nvmf target in multiple paths 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.747 [2024-05-15 00:56:10.738643] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.747 00:56:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:25.653 00:56:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:27.036 00:56:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:27.036 00:56:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:19:27.036 00:56:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:27.036 00:56:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:27.036 00:56:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:19:28.950 00:56:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:28.950 00:56:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:28.950 00:56:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:28.950 00:56:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:28.950 00:56:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:28.950 00:56:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:19:28.950 00:56:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:28.950 [global] 00:19:28.950 thread=1 00:19:28.950 invalidate=1 00:19:28.950 rw=write 00:19:28.950 time_based=1 00:19:28.950 runtime=1 00:19:28.950 ioengine=libaio 00:19:28.950 direct=1 00:19:28.950 bs=4096 00:19:28.950 iodepth=1 00:19:28.950 norandommap=0 00:19:28.950 numjobs=1 00:19:28.950 00:19:28.950 verify_dump=1 00:19:28.950 verify_backlog=512 00:19:28.950 verify_state_save=0 00:19:28.950 do_verify=1 00:19:28.950 verify=crc32c-intel 00:19:28.950 [job0] 00:19:28.950 filename=/dev/nvme0n1 00:19:28.950 Could not set queue depth (nvme0n1) 00:19:29.209 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:19:29.209 fio-3.35 00:19:29.209 Starting 1 thread 00:19:30.584 00:19:30.584 job0: (groupid=0, jobs=1): err= 0: pid=3486642: Wed May 15 00:56:17 2024 00:19:30.584 read: IOPS=2789, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:19:30.584 slat (nsec): min=3788, max=45145, avg=7658.66, stdev=4863.81 00:19:30.584 clat (usec): min=150, max=693, avg=200.42, stdev=41.39 00:19:30.584 lat (usec): min=156, max=738, avg=208.08, stdev=45.14 00:19:30.584 clat percentiles (usec): 00:19:30.584 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 182], 00:19:30.584 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:19:30.584 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 237], 95.00th=[ 314], 00:19:30.584 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 420], 99.95th=[ 562], 00:19:30.584 | 99.99th=[ 693] 00:19:30.584 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:19:30.584 slat (nsec): min=4886, max=49355, avg=9292.32, stdev=3720.07 00:19:30.584 clat (usec): min=99, max=598, avg=122.60, stdev=33.17 00:19:30.584 lat (usec): min=109, max=647, avg=131.90, stdev=35.93 00:19:30.584 clat percentiles (usec): 00:19:30.584 | 1.00th=[ 105], 5.00th=[ 108], 10.00th=[ 109], 20.00th=[ 110], 00:19:30.584 | 30.00th=[ 110], 40.00th=[ 111], 50.00th=[ 112], 60.00th=[ 113], 00:19:30.584 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 172], 95.00th=[ 204], 00:19:30.584 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 273], 99.95th=[ 273], 00:19:30.584 | 99.99th=[ 603] 00:19:30.584 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:19:30.584 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:30.584 lat (usec) : 100=0.02%, 250=94.92%, 500=5.01%, 750=0.05% 00:19:30.584 cpu : usr=2.20%, sys=4.70%, ctx=5864, majf=0, minf=1 00:19:30.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.585 issued rwts: total=2792,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.585 00:19:30.585 Run status group 0 (all jobs): 00:19:30.585 READ: bw=10.9MiB/s (11.4MB/s), 10.9MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=10.9MiB (11.4MB), run=1001-1001msec 00:19:30.585 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:19:30.585 00:19:30.585 Disk stats (read/write): 00:19:30.585 nvme0n1: ios=2610/2649, merge=0/0, ticks=522/331, in_queue=853, util=91.48% 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # return 0 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.585 rmmod nvme_tcp 00:19:30.585 rmmod nvme_fabrics 00:19:30.585 rmmod nvme_keyring 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3485268 ']' 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3485268 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3485268 ']' 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3485268 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3485268 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3485268' 00:19:30.585 killing process with pid 3485268 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3485268 00:19:30.585 [2024-05-15 00:56:17.621410] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:30.585 00:56:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3485268 00:19:31.153 00:56:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:31.153 00:56:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:31.153 00:56:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:31.153 00:56:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.153 00:56:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.153 00:56:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.153 00:56:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.153 00:56:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.693 00:56:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:33.693 00:19:33.693 real 0m15.673s 00:19:33.693 user 0m47.172s 00:19:33.693 sys 0m4.583s 00:19:33.693 00:56:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:19:33.693 00:56:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:33.693 ************************************ 00:19:33.693 END TEST nvmf_nmic 00:19:33.693 ************************************ 00:19:33.693 00:56:20 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:33.693 00:56:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:33.693 00:56:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:33.693 00:56:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:33.693 ************************************ 00:19:33.693 START TEST nvmf_fio_target 00:19:33.693 ************************************ 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:33.693 * Looking for test storage... 00:19:33.693 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.693 00:56:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.261 00:56:26 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:40.261 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:40.261 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:40.261 
Found net devices under 0000:27:00.0: cvl_0_0 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:40.261 Found net devices under 0000:27:00.1: cvl_0_1 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.261 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:40.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:19:40.262 00:19:40.262 --- 10.0.0.2 ping statistics --- 00:19:40.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.262 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:19:40.262 00:19:40.262 --- 10.0.0.1 ping statistics --- 00:19:40.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.262 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3491103 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3491103 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3491103 ']' 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
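As in the nmic run above, nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace so its TCP listener binds on the namespaced 10.0.0.2 port, and the harness waits for the application's JSON-RPC socket before driving it. A rough stand-in for that launch (the real waitforlisten helper retries against the RPC socket; the loop below is only an approximation):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until /var/tmp/spdk.sock exists and answers RPCs
    until /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket answers, fio.sh builds the test configuration over rpc.py exactly as the calls below show: nvmf_create_transport -t tcp -o -u 8192, seven bdev_malloc_create calls feeding a raid0 and a concat0 via bdev_raid_create, one nvmf_create_subsystem, four nvmf_subsystem_add_ns calls, and an nvmf_subsystem_add_listener on 10.0.0.2:4420, which gives the initiator the four namespaces (/dev/nvme0n1 through /dev/nvme0n4) exercised by the write and randwrite fio jobs that follow.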
00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:40.262 00:56:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.262 [2024-05-15 00:56:26.718759] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:19:40.262 [2024-05-15 00:56:26.718863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.262 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.262 [2024-05-15 00:56:26.841609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.262 [2024-05-15 00:56:26.936189] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.262 [2024-05-15 00:56:26.936226] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.262 [2024-05-15 00:56:26.936235] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.262 [2024-05-15 00:56:26.936244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.262 [2024-05-15 00:56:26.936251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.262 [2024-05-15 00:56:26.936442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.262 [2024-05-15 00:56:26.936550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.262 [2024-05-15 00:56:26.936664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.262 [2024-05-15 00:56:26.936674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.522 00:56:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:40.522 00:56:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:19:40.522 00:56:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.522 00:56:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.522 00:56:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.522 00:56:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.522 00:56:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:40.782 [2024-05-15 00:56:27.603990] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.782 00:56:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:40.782 00:56:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:40.782 00:56:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.040 00:56:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:41.040 00:56:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.297 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:41.297 00:56:28 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.297 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:41.297 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:41.556 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.817 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:41.817 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.817 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:41.817 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:42.076 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:42.076 00:56:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:42.337 00:56:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:42.337 00:56:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:42.337 00:56:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:42.596 00:56:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:42.596 00:56:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:42.596 00:56:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.854 [2024-05-15 00:56:29.733459] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:42.854 [2024-05-15 00:56:29.733767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.854 00:56:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:42.854 00:56:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:43.112 00:56:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:44.556 00:56:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:44.556 
00:56:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:44.556 00:56:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:44.556 00:56:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:44.556 00:56:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:44.556 00:56:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:46.458 00:56:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:46.458 00:56:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:46.458 00:56:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:46.458 00:56:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:46.458 00:56:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:46.458 00:56:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:46.458 00:56:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:46.458 [global] 00:19:46.458 thread=1 00:19:46.458 invalidate=1 00:19:46.458 rw=write 00:19:46.458 time_based=1 00:19:46.458 runtime=1 00:19:46.458 ioengine=libaio 00:19:46.458 direct=1 00:19:46.458 bs=4096 00:19:46.458 iodepth=1 00:19:46.458 norandommap=0 00:19:46.458 numjobs=1 00:19:46.458 00:19:46.458 verify_dump=1 00:19:46.458 verify_backlog=512 00:19:46.458 verify_state_save=0 00:19:46.458 do_verify=1 00:19:46.458 verify=crc32c-intel 00:19:46.458 [job0] 00:19:46.458 filename=/dev/nvme0n1 00:19:46.458 [job1] 00:19:46.458 filename=/dev/nvme0n2 00:19:46.458 [job2] 00:19:46.458 filename=/dev/nvme0n3 00:19:46.458 [job3] 00:19:46.458 filename=/dev/nvme0n4 00:19:46.726 Could not set queue depth (nvme0n1) 00:19:46.726 Could not set queue depth (nvme0n2) 00:19:46.726 Could not set queue depth (nvme0n3) 00:19:46.726 Could not set queue depth (nvme0n4) 00:19:46.984 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.984 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.984 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.985 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.985 fio-3.35 00:19:46.985 Starting 4 threads 00:19:48.367 00:19:48.367 job0: (groupid=0, jobs=1): err= 0: pid=3492810: Wed May 15 00:56:35 2024 00:19:48.367 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:48.367 slat (nsec): min=5052, max=66205, avg=18919.49, stdev=13381.83 00:19:48.367 clat (usec): min=182, max=799, avg=338.21, stdev=127.54 00:19:48.367 lat (usec): min=189, max=830, avg=357.13, stdev=134.46 00:19:48.367 clat percentiles (usec): 00:19:48.367 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 243], 00:19:48.367 | 30.00th=[ 255], 40.00th=[ 269], 50.00th=[ 293], 60.00th=[ 322], 00:19:48.367 | 70.00th=[ 347], 80.00th=[ 441], 90.00th=[ 562], 95.00th=[ 635], 00:19:48.367 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 758], 99.95th=[ 799], 00:19:48.367 | 99.99th=[ 799] 00:19:48.367 write: IOPS=2004, BW=8020KiB/s 
(8212kB/s)(8028KiB/1001msec); 0 zone resets 00:19:48.367 slat (nsec): min=4641, max=72475, avg=17175.44, stdev=15320.12 00:19:48.367 clat (usec): min=89, max=646, avg=200.32, stdev=77.62 00:19:48.367 lat (usec): min=94, max=699, avg=217.50, stdev=86.60 00:19:48.367 clat percentiles (usec): 00:19:48.367 | 1.00th=[ 113], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 128], 00:19:48.367 | 30.00th=[ 141], 40.00th=[ 161], 50.00th=[ 176], 60.00th=[ 202], 00:19:48.367 | 70.00th=[ 239], 80.00th=[ 273], 90.00th=[ 310], 95.00th=[ 351], 00:19:48.367 | 99.00th=[ 408], 99.50th=[ 437], 99.90th=[ 545], 99.95th=[ 545], 00:19:48.367 | 99.99th=[ 644] 00:19:48.367 bw ( KiB/s): min= 7416, max= 7416, per=53.64%, avg=7416.00, stdev= 0.00, samples=1 00:19:48.367 iops : min= 1854, max= 1854, avg=1854.00, stdev= 0.00, samples=1 00:19:48.367 lat (usec) : 100=0.06%, 250=52.95%, 500=40.87%, 750=6.04%, 1000=0.08% 00:19:48.367 cpu : usr=3.50%, sys=8.90%, ctx=3546, majf=0, minf=1 00:19:48.367 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.367 issued rwts: total=1536,2007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.367 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.367 job1: (groupid=0, jobs=1): err= 0: pid=3492811: Wed May 15 00:56:35 2024 00:19:48.367 read: IOPS=22, BW=89.8KiB/s (91.9kB/s)(92.0KiB/1025msec) 00:19:48.367 slat (nsec): min=23713, max=45305, avg=40601.00, stdev=5843.55 00:19:48.367 clat (usec): min=651, max=41058, avg=39176.21, stdev=8398.27 00:19:48.367 lat (usec): min=681, max=41081, avg=39216.81, stdev=8400.70 00:19:48.367 clat percentiles (usec): 00:19:48.367 | 1.00th=[ 652], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:19:48.367 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:48.367 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:48.367 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:48.367 | 99.99th=[41157] 00:19:48.367 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:19:48.367 slat (nsec): min=6418, max=95875, avg=21631.59, stdev=14477.83 00:19:48.367 clat (usec): min=119, max=439, avg=212.60, stdev=65.11 00:19:48.367 lat (usec): min=127, max=480, avg=234.23, stdev=74.26 00:19:48.367 clat percentiles (usec): 00:19:48.367 | 1.00th=[ 126], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 151], 00:19:48.367 | 30.00th=[ 161], 40.00th=[ 176], 50.00th=[ 192], 60.00th=[ 235], 00:19:48.367 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 334], 00:19:48.367 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[ 441], 99.95th=[ 441], 00:19:48.367 | 99.99th=[ 441] 00:19:48.367 bw ( KiB/s): min= 4096, max= 4096, per=29.62%, avg=4096.00, stdev= 0.00, samples=1 00:19:48.367 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:48.367 lat (usec) : 250=64.49%, 500=31.21%, 750=0.19% 00:19:48.367 lat (msec) : 50=4.11% 00:19:48.367 cpu : usr=0.68%, sys=1.46%, ctx=536, majf=0, minf=1 00:19:48.367 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.367 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.367 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:19:48.367 job2: (groupid=0, jobs=1): err= 0: pid=3492812: Wed May 15 00:56:35 2024 00:19:48.367 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:19:48.367 slat (nsec): min=9611, max=40426, avg=36316.68, stdev=7439.70 00:19:48.367 clat (usec): min=40627, max=41067, avg=40937.01, stdev=94.00 00:19:48.367 lat (usec): min=40636, max=41093, avg=40973.33, stdev=97.60 00:19:48.367 clat percentiles (usec): 00:19:48.367 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:19:48.367 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:48.367 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:48.367 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:48.367 | 99.99th=[41157] 00:19:48.367 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:19:48.367 slat (nsec): min=5851, max=90746, avg=12281.35, stdev=10785.59 00:19:48.367 clat (usec): min=111, max=573, avg=199.86, stdev=56.96 00:19:48.367 lat (usec): min=119, max=664, avg=212.14, stdev=63.43 00:19:48.367 clat percentiles (usec): 00:19:48.367 | 1.00th=[ 123], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 147], 00:19:48.367 | 30.00th=[ 157], 40.00th=[ 184], 50.00th=[ 200], 60.00th=[ 210], 00:19:48.367 | 70.00th=[ 219], 80.00th=[ 235], 90.00th=[ 273], 95.00th=[ 306], 00:19:48.367 | 99.00th=[ 359], 99.50th=[ 429], 99.90th=[ 578], 99.95th=[ 578], 00:19:48.367 | 99.99th=[ 578] 00:19:48.367 bw ( KiB/s): min= 4096, max= 4096, per=29.62%, avg=4096.00, stdev= 0.00, samples=1 00:19:48.367 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:48.367 lat (usec) : 250=81.84%, 500=13.86%, 750=0.19% 00:19:48.367 lat (msec) : 50=4.12% 00:19:48.367 cpu : usr=0.20%, sys=0.59%, ctx=535, majf=0, minf=1 00:19:48.367 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.367 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.367 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.367 job3: (groupid=0, jobs=1): err= 0: pid=3492813: Wed May 15 00:56:35 2024 00:19:48.367 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:19:48.367 slat (nsec): min=28597, max=45330, avg=40441.10, stdev=4482.34 00:19:48.367 clat (usec): min=673, max=42022, avg=39967.61, stdev=9003.48 00:19:48.367 lat (usec): min=702, max=42056, avg=40008.06, stdev=9006.20 00:19:48.367 clat percentiles (usec): 00:19:48.367 | 1.00th=[ 676], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:19:48.367 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:19:48.367 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:48.367 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:48.368 | 99.99th=[42206] 00:19:48.368 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:19:48.368 slat (nsec): min=6459, max=89575, avg=26833.60, stdev=14241.21 00:19:48.368 clat (usec): min=138, max=572, avg=305.45, stdev=100.38 00:19:48.368 lat (usec): min=146, max=606, avg=332.29, stdev=107.83 00:19:48.368 clat percentiles (usec): 00:19:48.368 | 1.00th=[ 147], 5.00th=[ 167], 10.00th=[ 186], 20.00th=[ 215], 00:19:48.368 | 30.00th=[ 243], 40.00th=[ 262], 50.00th=[ 285], 60.00th=[ 318], 00:19:48.368 | 70.00th=[ 367], 80.00th=[ 400], 90.00th=[ 457], 95.00th=[ 494], 
00:19:48.368 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 570], 99.95th=[ 570], 00:19:48.368 | 99.99th=[ 570] 00:19:48.368 bw ( KiB/s): min= 4096, max= 4096, per=29.62%, avg=4096.00, stdev= 0.00, samples=1 00:19:48.368 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:48.368 lat (usec) : 250=32.65%, 500=59.66%, 750=3.94% 00:19:48.368 lat (msec) : 50=3.75% 00:19:48.368 cpu : usr=0.99%, sys=1.68%, ctx=534, majf=0, minf=1 00:19:48.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.368 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.368 00:19:48.368 Run status group 0 (all jobs): 00:19:48.368 READ: bw=6252KiB/s (6402kB/s), 82.9KiB/s-6138KiB/s (84.9kB/s-6285kB/s), io=6408KiB (6562kB), run=1001-1025msec 00:19:48.368 WRITE: bw=13.5MiB/s (14.2MB/s), 1998KiB/s-8020KiB/s (2046kB/s-8212kB/s), io=13.8MiB (14.5MB), run=1001-1025msec 00:19:48.368 00:19:48.368 Disk stats (read/write): 00:19:48.368 nvme0n1: ios=1148/1536, merge=0/0, ticks=1324/298, in_queue=1622, util=95.79% 00:19:48.368 nvme0n2: ios=18/512, merge=0/0, ticks=697/87, in_queue=784, util=84.76% 00:19:48.368 nvme0n3: ios=17/512, merge=0/0, ticks=697/95, in_queue=792, util=88.38% 00:19:48.368 nvme0n4: ios=16/512, merge=0/0, ticks=631/116, in_queue=747, util=89.33% 00:19:48.368 00:56:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:48.368 [global] 00:19:48.368 thread=1 00:19:48.368 invalidate=1 00:19:48.368 rw=randwrite 00:19:48.368 time_based=1 00:19:48.368 runtime=1 00:19:48.368 ioengine=libaio 00:19:48.368 direct=1 00:19:48.368 bs=4096 00:19:48.368 iodepth=1 00:19:48.368 norandommap=0 00:19:48.368 numjobs=1 00:19:48.368 00:19:48.368 verify_dump=1 00:19:48.368 verify_backlog=512 00:19:48.368 verify_state_save=0 00:19:48.368 do_verify=1 00:19:48.368 verify=crc32c-intel 00:19:48.368 [job0] 00:19:48.368 filename=/dev/nvme0n1 00:19:48.368 [job1] 00:19:48.368 filename=/dev/nvme0n2 00:19:48.368 [job2] 00:19:48.368 filename=/dev/nvme0n3 00:19:48.368 [job3] 00:19:48.368 filename=/dev/nvme0n4 00:19:48.368 Could not set queue depth (nvme0n1) 00:19:48.368 Could not set queue depth (nvme0n2) 00:19:48.368 Could not set queue depth (nvme0n3) 00:19:48.368 Could not set queue depth (nvme0n4) 00:19:48.626 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.626 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.626 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.626 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.626 fio-3.35 00:19:48.626 Starting 4 threads 00:19:50.001 00:19:50.001 job0: (groupid=0, jobs=1): err= 0: pid=3493284: Wed May 15 00:56:36 2024 00:19:50.001 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:19:50.001 slat (nsec): min=18080, max=50560, avg=36077.05, stdev=9671.16 00:19:50.001 clat (usec): min=40776, max=43879, avg=41188.35, stdev=684.22 00:19:50.001 lat (usec): min=40811, max=43929, avg=41224.43, stdev=688.38 00:19:50.001 clat 
percentiles (usec): 00:19:50.001 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:19:50.001 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:50.001 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:50.001 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:19:50.001 | 99.99th=[43779] 00:19:50.001 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:19:50.001 slat (nsec): min=4562, max=50661, avg=14917.99, stdev=7843.08 00:19:50.001 clat (usec): min=123, max=542, avg=253.25, stdev=53.27 00:19:50.001 lat (usec): min=130, max=593, avg=268.16, stdev=54.51 00:19:50.001 clat percentiles (usec): 00:19:50.001 | 1.00th=[ 129], 5.00th=[ 155], 10.00th=[ 184], 20.00th=[ 217], 00:19:50.001 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 255], 60.00th=[ 265], 00:19:50.001 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 347], 00:19:50.001 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 545], 99.95th=[ 545], 00:19:50.001 | 99.99th=[ 545] 00:19:50.001 bw ( KiB/s): min= 4096, max= 4096, per=16.75%, avg=4096.00, stdev= 0.00, samples=1 00:19:50.001 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:50.001 lat (usec) : 250=43.71%, 500=52.16%, 750=0.19% 00:19:50.001 lat (msec) : 50=3.94% 00:19:50.001 cpu : usr=0.00%, sys=1.59%, ctx=533, majf=0, minf=1 00:19:50.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.001 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.001 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.001 job1: (groupid=0, jobs=1): err= 0: pid=3493285: Wed May 15 00:56:36 2024 00:19:50.001 read: IOPS=1848, BW=7393KiB/s (7570kB/s)(7400KiB/1001msec) 00:19:50.001 slat (nsec): min=3608, max=54924, avg=15002.53, stdev=10854.01 00:19:50.001 clat (usec): min=195, max=604, avg=303.49, stdev=69.07 00:19:50.001 lat (usec): min=201, max=626, avg=318.49, stdev=75.95 00:19:50.001 clat percentiles (usec): 00:19:50.001 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 239], 00:19:50.002 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 318], 00:19:50.002 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 408], 00:19:50.002 | 99.00th=[ 498], 99.50th=[ 523], 99.90th=[ 594], 99.95th=[ 603], 00:19:50.002 | 99.99th=[ 603] 00:19:50.002 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:50.002 slat (nsec): min=4891, max=55770, avg=11086.94, stdev=6710.41 00:19:50.002 clat (usec): min=114, max=472, avg=183.40, stdev=57.66 00:19:50.002 lat (usec): min=121, max=528, avg=194.49, stdev=61.27 00:19:50.002 clat percentiles (usec): 00:19:50.002 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 139], 00:19:50.002 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 159], 60.00th=[ 174], 00:19:50.002 | 70.00th=[ 194], 80.00th=[ 241], 90.00th=[ 269], 95.00th=[ 302], 00:19:50.002 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 400], 99.95th=[ 424], 00:19:50.002 | 99.99th=[ 474] 00:19:50.002 bw ( KiB/s): min= 8192, max= 8192, per=33.50%, avg=8192.00, stdev= 0.00, samples=1 00:19:50.002 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:50.002 lat (usec) : 250=57.54%, 500=42.00%, 750=0.46% 00:19:50.002 cpu : usr=3.10%, sys=6.90%, ctx=3899, majf=0, minf=1 00:19:50.002 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.002 issued rwts: total=1850,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.002 job2: (groupid=0, jobs=1): err= 0: pid=3493286: Wed May 15 00:56:36 2024 00:19:50.002 read: IOPS=1493, BW=5974KiB/s (6117kB/s)(5980KiB/1001msec) 00:19:50.002 slat (nsec): min=3862, max=59201, avg=18998.53, stdev=12077.47 00:19:50.002 clat (usec): min=212, max=876, avg=451.60, stdev=161.28 00:19:50.002 lat (usec): min=220, max=903, avg=470.60, stdev=166.47 00:19:50.002 clat percentiles (usec): 00:19:50.002 | 1.00th=[ 233], 5.00th=[ 255], 10.00th=[ 269], 20.00th=[ 289], 00:19:50.002 | 30.00th=[ 326], 40.00th=[ 375], 50.00th=[ 429], 60.00th=[ 478], 00:19:50.002 | 70.00th=[ 523], 80.00th=[ 603], 90.00th=[ 709], 95.00th=[ 750], 00:19:50.002 | 99.00th=[ 816], 99.50th=[ 824], 99.90th=[ 873], 99.95th=[ 881], 00:19:50.002 | 99.99th=[ 881] 00:19:50.002 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:50.002 slat (nsec): min=4687, max=56750, avg=8842.06, stdev=5181.69 00:19:50.002 clat (usec): min=104, max=585, avg=177.51, stdev=46.80 00:19:50.002 lat (usec): min=110, max=630, avg=186.35, stdev=49.76 00:19:50.002 clat percentiles (usec): 00:19:50.002 | 1.00th=[ 115], 5.00th=[ 126], 10.00th=[ 133], 20.00th=[ 141], 00:19:50.002 | 30.00th=[ 147], 40.00th=[ 155], 50.00th=[ 165], 60.00th=[ 180], 00:19:50.002 | 70.00th=[ 194], 80.00th=[ 219], 90.00th=[ 243], 95.00th=[ 247], 00:19:50.002 | 99.00th=[ 285], 99.50th=[ 404], 99.90th=[ 553], 99.95th=[ 586], 00:19:50.002 | 99.99th=[ 586] 00:19:50.002 bw ( KiB/s): min= 8192, max= 8192, per=33.50%, avg=8192.00, stdev= 0.00, samples=1 00:19:50.002 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:50.002 lat (usec) : 250=50.87%, 500=31.74%, 750=14.65%, 1000=2.74% 00:19:50.002 cpu : usr=2.30%, sys=6.10%, ctx=3034, majf=0, minf=1 00:19:50.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.002 issued rwts: total=1495,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.002 job3: (groupid=0, jobs=1): err= 0: pid=3493287: Wed May 15 00:56:36 2024 00:19:50.002 read: IOPS=1892, BW=7568KiB/s (7750kB/s)(7576KiB/1001msec) 00:19:50.002 slat (nsec): min=3649, max=46628, avg=10019.04, stdev=8160.78 00:19:50.002 clat (usec): min=153, max=652, avg=298.81, stdev=77.66 00:19:50.002 lat (usec): min=159, max=659, avg=308.83, stdev=82.56 00:19:50.002 clat percentiles (usec): 00:19:50.002 | 1.00th=[ 190], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 241], 00:19:50.002 | 30.00th=[ 255], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:19:50.002 | 70.00th=[ 306], 80.00th=[ 347], 90.00th=[ 416], 95.00th=[ 474], 00:19:50.002 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 635], 99.95th=[ 652], 00:19:50.002 | 99.99th=[ 652] 00:19:50.002 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:50.002 slat (nsec): min=4772, max=45250, avg=9155.52, stdev=6130.74 00:19:50.002 clat (usec): min=107, max=1215, avg=187.98, stdev=54.19 00:19:50.002 lat (usec): min=114, 
max=1221, avg=197.14, stdev=55.70 00:19:50.002 clat percentiles (usec): 00:19:50.002 | 1.00th=[ 117], 5.00th=[ 126], 10.00th=[ 133], 20.00th=[ 143], 00:19:50.002 | 30.00th=[ 151], 40.00th=[ 163], 50.00th=[ 184], 60.00th=[ 194], 00:19:50.002 | 70.00th=[ 208], 80.00th=[ 233], 90.00th=[ 262], 95.00th=[ 277], 00:19:50.002 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 379], 99.95th=[ 412], 00:19:50.002 | 99.99th=[ 1221] 00:19:50.002 bw ( KiB/s): min= 8192, max= 8192, per=33.50%, avg=8192.00, stdev= 0.00, samples=1 00:19:50.002 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:50.002 lat (usec) : 250=57.18%, 500=41.22%, 750=1.57% 00:19:50.002 lat (msec) : 2=0.03% 00:19:50.002 cpu : usr=2.40%, sys=3.40%, ctx=3944, majf=0, minf=1 00:19:50.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.002 issued rwts: total=1894,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.002 00:19:50.002 Run status group 0 (all jobs): 00:19:50.002 READ: bw=20.4MiB/s (21.4MB/s), 83.6KiB/s-7568KiB/s (85.6kB/s-7750kB/s), io=20.5MiB (21.5MB), run=1001-1005msec 00:19:50.002 WRITE: bw=23.9MiB/s (25.0MB/s), 2038KiB/s-8184KiB/s (2087kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1005msec 00:19:50.002 00:19:50.002 Disk stats (read/write): 00:19:50.002 nvme0n1: ios=67/512, merge=0/0, ticks=727/113, in_queue=840, util=85.77% 00:19:50.002 nvme0n2: ios=1560/1601, merge=0/0, ticks=1392/285, in_queue=1677, util=96.94% 00:19:50.002 nvme0n3: ios=1047/1498, merge=0/0, ticks=1388/248, in_queue=1636, util=96.84% 00:19:50.002 nvme0n4: ios=1573/1741, merge=0/0, ticks=1242/318, in_queue=1560, util=96.17% 00:19:50.002 00:56:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:50.002 [global] 00:19:50.002 thread=1 00:19:50.002 invalidate=1 00:19:50.002 rw=write 00:19:50.002 time_based=1 00:19:50.002 runtime=1 00:19:50.002 ioengine=libaio 00:19:50.002 direct=1 00:19:50.002 bs=4096 00:19:50.002 iodepth=128 00:19:50.002 norandommap=0 00:19:50.002 numjobs=1 00:19:50.002 00:19:50.002 verify_dump=1 00:19:50.002 verify_backlog=512 00:19:50.002 verify_state_save=0 00:19:50.002 do_verify=1 00:19:50.002 verify=crc32c-intel 00:19:50.002 [job0] 00:19:50.002 filename=/dev/nvme0n1 00:19:50.002 [job1] 00:19:50.002 filename=/dev/nvme0n2 00:19:50.002 [job2] 00:19:50.002 filename=/dev/nvme0n3 00:19:50.002 [job3] 00:19:50.002 filename=/dev/nvme0n4 00:19:50.002 Could not set queue depth (nvme0n1) 00:19:50.002 Could not set queue depth (nvme0n2) 00:19:50.002 Could not set queue depth (nvme0n3) 00:19:50.002 Could not set queue depth (nvme0n4) 00:19:50.261 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.261 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.261 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.261 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.261 fio-3.35 00:19:50.261 Starting 4 threads 00:19:51.640 00:19:51.640 job0: (groupid=0, jobs=1): err= 0: pid=3493756: Wed May 15 00:56:38 2024 
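The [global] and [job0]-[job3] sections echoed above are the whole job description fio-wrapper hands to fio for the -d 128 -t write step. As a rough stand-alone illustration (not part of the captured output), the job0 workload could be reproduced with a hand-written job file built from exactly those values; the /tmp path is arbitrary and the wrapper may pass options that do not show up in the trace:

# Hypothetical reconstruction of the job file for job0 of the write/iodepth=128 step.
# Every option value is copied from the [global]/[job0] sections echoed in the trace above.
cat > /tmp/job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/job0.fio

Because do_verify=1 with verify=crc32c-intel reads the data back after the timed write phase, the per-job statistics that follow report both a read and a write line for every job.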
00:19:51.640 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1015msec) 00:19:51.640 slat (nsec): min=881, max=17624k, avg=119400.16, stdev=839222.42 00:19:51.640 clat (usec): min=3222, max=60465, avg=13212.55, stdev=10331.68 00:19:51.640 lat (usec): min=3226, max=60473, avg=13331.95, stdev=10420.06 00:19:51.640 clat percentiles (usec): 00:19:51.640 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6587], 20.00th=[ 7701], 00:19:51.640 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[ 9896], 00:19:51.640 | 70.00th=[11207], 80.00th=[16712], 90.00th=[21365], 95.00th=[40109], 00:19:51.640 | 99.00th=[51643], 99.50th=[57934], 99.90th=[60556], 99.95th=[60556], 00:19:51.640 | 99.99th=[60556] 00:19:51.640 write: IOPS=4309, BW=16.8MiB/s (17.7MB/s)(17.1MiB/1015msec); 0 zone resets 00:19:51.640 slat (nsec): min=1679, max=8589.3k, avg=111292.34, stdev=485220.42 00:19:51.640 clat (usec): min=1167, max=60426, avg=17072.99, stdev=11565.61 00:19:51.640 lat (usec): min=1175, max=60429, avg=17184.28, stdev=11634.15 00:19:51.640 clat percentiles (usec): 00:19:51.640 | 1.00th=[ 2376], 5.00th=[ 4293], 10.00th=[ 6259], 20.00th=[ 7308], 00:19:51.640 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[11600], 60.00th=[20579], 00:19:51.640 | 70.00th=[23725], 80.00th=[29754], 90.00th=[33817], 95.00th=[36439], 00:19:51.640 | 99.00th=[44827], 99.50th=[45876], 99.90th=[52691], 99.95th=[52691], 00:19:51.640 | 99.99th=[60556] 00:19:51.640 bw ( KiB/s): min=13512, max=20423, per=24.61%, avg=16967.50, stdev=4886.81, samples=2 00:19:51.640 iops : min= 3378, max= 5105, avg=4241.50, stdev=1221.17, samples=2 00:19:51.640 lat (msec) : 2=0.25%, 4=2.13%, 10=51.49%, 20=17.76%, 50=27.30% 00:19:51.640 lat (msec) : 100=1.09% 00:19:51.640 cpu : usr=2.27%, sys=4.44%, ctx=569, majf=0, minf=1 00:19:51.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:51.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.640 issued rwts: total=4096,4374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.640 job1: (groupid=0, jobs=1): err= 0: pid=3493757: Wed May 15 00:56:38 2024 00:19:51.640 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:19:51.640 slat (nsec): min=970, max=10746k, avg=114692.66, stdev=790122.31 00:19:51.640 clat (usec): min=5581, max=33838, avg=13538.64, stdev=5221.23 00:19:51.640 lat (usec): min=5584, max=34189, avg=13653.33, stdev=5301.10 00:19:51.640 clat percentiles (usec): 00:19:51.640 | 1.00th=[ 8356], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 8979], 00:19:51.640 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[11076], 60.00th=[14484], 00:19:51.640 | 70.00th=[16581], 80.00th=[19006], 90.00th=[20317], 95.00th=[23200], 00:19:51.640 | 99.00th=[28443], 99.50th=[31589], 99.90th=[33817], 99.95th=[33817], 00:19:51.640 | 99.99th=[33817] 00:19:51.640 write: IOPS=3387, BW=13.2MiB/s (13.9MB/s)(13.4MiB/1015msec); 0 zone resets 00:19:51.640 slat (nsec): min=1861, max=20221k, avg=183988.42, stdev=918643.71 00:19:51.640 clat (usec): min=2501, max=59515, avg=25355.92, stdev=11217.03 00:19:51.640 lat (usec): min=2506, max=63320, avg=25539.91, stdev=11283.37 00:19:51.640 clat percentiles (usec): 00:19:51.640 | 1.00th=[ 7635], 5.00th=[ 8029], 10.00th=[11600], 20.00th=[14877], 00:19:51.640 | 30.00th=[17433], 40.00th=[21627], 50.00th=[23987], 60.00th=[28705], 00:19:51.640 | 70.00th=[32113], 80.00th=[35390], 90.00th=[38536], 
95.00th=[44303], 00:19:51.640 | 99.00th=[53740], 99.50th=[57410], 99.90th=[59507], 99.95th=[59507], 00:19:51.640 | 99.99th=[59507] 00:19:51.640 bw ( KiB/s): min=13224, max=13264, per=19.21%, avg=13244.00, stdev=28.28, samples=2 00:19:51.640 iops : min= 3306, max= 3316, avg=3311.00, stdev= 7.07, samples=2 00:19:51.640 lat (msec) : 4=0.31%, 10=25.15%, 20=33.52%, 50=39.55%, 100=1.47% 00:19:51.640 cpu : usr=2.66%, sys=3.35%, ctx=379, majf=0, minf=1 00:19:51.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:51.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.640 issued rwts: total=3072,3438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.640 job2: (groupid=0, jobs=1): err= 0: pid=3493758: Wed May 15 00:56:38 2024 00:19:51.640 read: IOPS=6809, BW=26.6MiB/s (27.9MB/s)(26.7MiB/1004msec) 00:19:51.640 slat (nsec): min=841, max=12091k, avg=73510.60, stdev=499571.18 00:19:51.640 clat (usec): min=1988, max=30581, avg=9373.79, stdev=3874.90 00:19:51.640 lat (usec): min=3770, max=30623, avg=9447.30, stdev=3911.01 00:19:51.640 clat percentiles (usec): 00:19:51.640 | 1.00th=[ 5276], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7767], 00:19:51.640 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8291], 60.00th=[ 8455], 00:19:51.640 | 70.00th=[ 8717], 80.00th=[ 9765], 90.00th=[11207], 95.00th=[19006], 00:19:51.640 | 99.00th=[26084], 99.50th=[29230], 99.90th=[29492], 99.95th=[30016], 00:19:51.640 | 99.99th=[30540] 00:19:51.640 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:19:51.640 slat (nsec): min=1509, max=14640k, avg=66372.28, stdev=432264.90 00:19:51.640 clat (usec): min=4259, max=30060, avg=8827.36, stdev=2129.82 00:19:51.640 lat (usec): min=4262, max=30094, avg=8893.73, stdev=2175.44 00:19:51.640 clat percentiles (usec): 00:19:51.640 | 1.00th=[ 5211], 5.00th=[ 6783], 10.00th=[ 7439], 20.00th=[ 7832], 00:19:51.640 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:19:51.640 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[10421], 95.00th=[14746], 00:19:51.640 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17171], 99.95th=[25822], 00:19:51.640 | 99.99th=[30016] 00:19:51.640 bw ( KiB/s): min=25848, max=31496, per=41.60%, avg=28672.00, stdev=3993.74, samples=2 00:19:51.640 iops : min= 6462, max= 7874, avg=7168.00, stdev=998.43, samples=2 00:19:51.640 lat (msec) : 2=0.01%, 4=0.20%, 10=85.52%, 20=12.67%, 50=1.60% 00:19:51.640 cpu : usr=3.29%, sys=6.18%, ctx=806, majf=0, minf=1 00:19:51.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:51.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.640 issued rwts: total=6837,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.640 job3: (groupid=0, jobs=1): err= 0: pid=3493759: Wed May 15 00:56:38 2024 00:19:51.640 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:19:51.640 slat (nsec): min=1046, max=9783.7k, avg=117204.46, stdev=713463.85 00:19:51.640 clat (usec): min=7551, max=30018, avg=13780.17, stdev=3467.31 00:19:51.640 lat (usec): min=7557, max=30026, avg=13897.38, stdev=3525.87 00:19:51.640 clat percentiles (usec): 00:19:51.640 | 1.00th=[ 8848], 5.00th=[10683], 
10.00th=[11207], 20.00th=[11863], 00:19:51.640 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[13173], 00:19:51.640 | 70.00th=[13435], 80.00th=[15139], 90.00th=[16909], 95.00th=[20055], 00:19:51.640 | 99.00th=[30016], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016], 00:19:51.640 | 99.99th=[30016] 00:19:51.640 write: IOPS=2488, BW=9954KiB/s (10.2MB/s)(9.81MiB/1009msec); 0 zone resets 00:19:51.641 slat (nsec): min=1565, max=31834k, avg=298633.22, stdev=1549321.08 00:19:51.641 clat (msec): min=8, max=125, avg=39.10, stdev=22.15 00:19:51.641 lat (msec): min=9, max=125, avg=39.40, stdev=22.30 00:19:51.641 clat percentiles (msec): 00:19:51.641 | 1.00th=[ 12], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 22], 00:19:51.641 | 30.00th=[ 23], 40.00th=[ 29], 50.00th=[ 34], 60.00th=[ 39], 00:19:51.641 | 70.00th=[ 46], 80.00th=[ 55], 90.00th=[ 60], 95.00th=[ 90], 00:19:51.641 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 126], 99.95th=[ 126], 00:19:51.641 | 99.99th=[ 126] 00:19:51.641 bw ( KiB/s): min= 9160, max= 9912, per=13.83%, avg=9536.00, stdev=531.74, samples=2 00:19:51.641 iops : min= 2290, max= 2478, avg=2384.00, stdev=132.94, samples=2 00:19:51.641 lat (msec) : 10=1.51%, 20=47.44%, 50=37.38%, 100=11.95%, 250=1.71% 00:19:51.641 cpu : usr=1.79%, sys=2.38%, ctx=317, majf=0, minf=1 00:19:51.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:51.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.641 issued rwts: total=2048,2511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.641 00:19:51.641 Run status group 0 (all jobs): 00:19:51.641 READ: bw=61.8MiB/s (64.8MB/s), 8119KiB/s-26.6MiB/s (8314kB/s-27.9MB/s), io=62.7MiB (65.8MB), run=1004-1015msec 00:19:51.641 WRITE: bw=67.3MiB/s (70.6MB/s), 9954KiB/s-27.9MiB/s (10.2MB/s-29.2MB/s), io=68.3MiB (71.6MB), run=1004-1015msec 00:19:51.641 00:19:51.641 Disk stats (read/write): 00:19:51.641 nvme0n1: ios=3634/3767, merge=0/0, ticks=45109/57359, in_queue=102468, util=85.77% 00:19:51.641 nvme0n2: ios=2581/2831, merge=0/0, ticks=33857/68804, in_queue=102661, util=90.20% 00:19:51.641 nvme0n3: ios=5632/5777, merge=0/0, ticks=31399/27701, in_queue=59100, util=88.41% 00:19:51.641 nvme0n4: ios=1536/2047, merge=0/0, ticks=11012/40649, in_queue=51661, util=89.45% 00:19:51.641 00:56:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:51.641 [global] 00:19:51.641 thread=1 00:19:51.641 invalidate=1 00:19:51.641 rw=randwrite 00:19:51.641 time_based=1 00:19:51.641 runtime=1 00:19:51.641 ioengine=libaio 00:19:51.641 direct=1 00:19:51.641 bs=4096 00:19:51.641 iodepth=128 00:19:51.641 norandommap=0 00:19:51.641 numjobs=1 00:19:51.641 00:19:51.641 verify_dump=1 00:19:51.641 verify_backlog=512 00:19:51.641 verify_state_save=0 00:19:51.641 do_verify=1 00:19:51.641 verify=crc32c-intel 00:19:51.641 [job0] 00:19:51.641 filename=/dev/nvme0n1 00:19:51.641 [job1] 00:19:51.641 filename=/dev/nvme0n2 00:19:51.641 [job2] 00:19:51.641 filename=/dev/nvme0n3 00:19:51.641 [job3] 00:19:51.641 filename=/dev/nvme0n4 00:19:51.641 Could not set queue depth (nvme0n1) 00:19:51.641 Could not set queue depth (nvme0n2) 00:19:51.641 Could not set queue depth (nvme0n3) 00:19:51.641 Could not set queue depth (nvme0n4) 00:19:51.898 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:51.898 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:51.898 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:51.898 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:51.898 fio-3.35 00:19:51.898 Starting 4 threads 00:19:53.279 00:19:53.279 job0: (groupid=0, jobs=1): err= 0: pid=3494222: Wed May 15 00:56:39 2024 00:19:53.279 read: IOPS=6370, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1004msec) 00:19:53.279 slat (nsec): min=835, max=4636.2k, avg=77692.79, stdev=460946.82 00:19:53.279 clat (usec): min=987, max=14703, avg=9635.99, stdev=1300.29 00:19:53.279 lat (usec): min=5212, max=14707, avg=9713.69, stdev=1343.70 00:19:53.279 clat percentiles (usec): 00:19:53.279 | 1.00th=[ 5932], 5.00th=[ 7177], 10.00th=[ 7963], 20.00th=[ 9110], 00:19:53.279 | 30.00th=[ 9372], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:19:53.279 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[11207], 95.00th=[12125], 00:19:53.279 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14222], 99.95th=[14484], 00:19:53.279 | 99.99th=[14746] 00:19:53.279 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:19:53.279 slat (nsec): min=1420, max=5081.7k, avg=72917.16, stdev=370315.20 00:19:53.279 clat (usec): min=4944, max=15172, avg=9832.60, stdev=1084.70 00:19:53.279 lat (usec): min=5100, max=15181, avg=9905.52, stdev=1119.93 00:19:53.279 clat percentiles (usec): 00:19:53.279 | 1.00th=[ 6390], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9372], 00:19:53.279 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:19:53.279 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10552], 95.00th=[11469], 00:19:53.279 | 99.00th=[13566], 99.50th=[14091], 99.90th=[14746], 99.95th=[14877], 00:19:53.279 | 99.99th=[15139] 00:19:53.279 bw ( KiB/s): min=26568, max=26680, per=35.12%, avg=26624.00, stdev=79.20, samples=2 00:19:53.279 iops : min= 6642, max= 6670, avg=6656.00, stdev=19.80, samples=2 00:19:53.279 lat (usec) : 1000=0.01% 00:19:53.279 lat (msec) : 10=66.98%, 20=33.01% 00:19:53.279 cpu : usr=2.59%, sys=4.59%, ctx=762, majf=0, minf=1 00:19:53.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:53.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.279 issued rwts: total=6396,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.279 job1: (groupid=0, jobs=1): err= 0: pid=3494224: Wed May 15 00:56:39 2024 00:19:53.279 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:19:53.279 slat (nsec): min=961, max=26410k, avg=161700.06, stdev=1255004.39 00:19:53.279 clat (usec): min=7316, max=45838, avg=19526.95, stdev=5410.38 00:19:53.279 lat (usec): min=7320, max=45874, avg=19688.65, stdev=5500.50 00:19:53.279 clat percentiles (usec): 00:19:53.279 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[11076], 20.00th=[16188], 00:19:53.280 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19530], 60.00th=[19792], 00:19:53.280 | 70.00th=[20055], 80.00th=[21890], 90.00th=[27132], 95.00th=[30016], 00:19:53.280 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36963], 99.95th=[39060], 00:19:53.280 | 99.99th=[45876] 00:19:53.280 write: IOPS=3253, 
BW=12.7MiB/s (13.3MB/s)(12.9MiB/1015msec); 0 zone resets 00:19:53.280 slat (nsec): min=1684, max=15597k, avg=147938.46, stdev=743211.85 00:19:53.280 clat (usec): min=1114, max=45578, avg=20805.14, stdev=4968.01 00:19:53.280 lat (usec): min=1123, max=45609, avg=20953.07, stdev=5025.43 00:19:53.280 clat percentiles (usec): 00:19:53.280 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[17695], 20.00th=[19268], 00:19:53.280 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:19:53.280 | 70.00th=[21103], 80.00th=[21890], 90.00th=[26608], 95.00th=[30540], 00:19:53.280 | 99.00th=[35390], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:19:53.280 | 99.99th=[45351] 00:19:53.280 bw ( KiB/s): min=12680, max=12720, per=16.75%, avg=12700.00, stdev=28.28, samples=2 00:19:53.280 iops : min= 3170, max= 3180, avg=3175.00, stdev= 7.07, samples=2 00:19:53.280 lat (msec) : 2=0.03%, 4=0.09%, 10=4.06%, 20=45.11%, 50=50.71% 00:19:53.280 cpu : usr=1.78%, sys=3.35%, ctx=435, majf=0, minf=1 00:19:53.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:53.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.280 issued rwts: total=3072,3302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.280 job2: (groupid=0, jobs=1): err= 0: pid=3494225: Wed May 15 00:56:39 2024 00:19:53.280 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:19:53.280 slat (nsec): min=890, max=10348k, avg=95109.12, stdev=718994.39 00:19:53.280 clat (usec): min=3402, max=21177, avg=11504.17, stdev=2724.54 00:19:53.280 lat (usec): min=3407, max=21181, avg=11599.28, stdev=2775.02 00:19:53.280 clat percentiles (usec): 00:19:53.280 | 1.00th=[ 4686], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10159], 00:19:53.280 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:19:53.280 | 70.00th=[11207], 80.00th=[12649], 90.00th=[15926], 95.00th=[17957], 00:19:53.280 | 99.00th=[19792], 99.50th=[20579], 99.90th=[21103], 99.95th=[21103], 00:19:53.280 | 99.99th=[21103] 00:19:53.280 write: IOPS=5941, BW=23.2MiB/s (24.3MB/s)(23.4MiB/1010msec); 0 zone resets 00:19:53.280 slat (nsec): min=1663, max=17091k, avg=74113.85, stdev=418268.83 00:19:53.280 clat (usec): min=1276, max=27559, avg=10525.45, stdev=2886.14 00:19:53.280 lat (usec): min=1294, max=28171, avg=10599.56, stdev=2918.44 00:19:53.280 clat percentiles (usec): 00:19:53.280 | 1.00th=[ 3163], 5.00th=[ 5342], 10.00th=[ 7308], 20.00th=[ 9503], 00:19:53.280 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:19:53.280 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12256], 00:19:53.280 | 99.00th=[22938], 99.50th=[25560], 99.90th=[27657], 99.95th=[27657], 00:19:53.280 | 99.99th=[27657] 00:19:53.280 bw ( KiB/s): min=22432, max=24560, per=30.99%, avg=23496.00, stdev=1504.72, samples=2 00:19:53.280 iops : min= 5608, max= 6140, avg=5874.00, stdev=376.18, samples=2 00:19:53.280 lat (msec) : 2=0.02%, 4=1.56%, 10=18.80%, 20=78.26%, 50=1.37% 00:19:53.280 cpu : usr=2.68%, sys=4.66%, ctx=727, majf=0, minf=1 00:19:53.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:53.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.280 issued rwts: total=5632,6001,0,0 short=0,0,0,0 dropped=0,0,0,0 
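The IOPS and BW figures fio prints for each job above are two views of the same totals, tied together by the 4096-byte block size from the job file. A throwaway cross-check for job0 of this group, which reports read IOPS=6370 and BW=24.9MiB/s:

# Sanity check, not part of the captured output: bandwidth = IOPS * block size.
awk 'BEGIN { iops = 6370; bs = 4096; printf "%.1f MiB/s\n", iops * bs / 1048576 }'
# prints 24.9 MiB/s, matching the bandwidth fio reports for that job.

The same relation holds for the other jobs in the group, e.g. job2 with 5576 IOPS works out to 21.8 MiB/s, as reported.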
00:19:53.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.280 job3: (groupid=0, jobs=1): err= 0: pid=3494226: Wed May 15 00:56:39 2024 00:19:53.280 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:19:53.280 slat (nsec): min=1089, max=19474k, avg=181801.01, stdev=1273387.03 00:19:53.280 clat (usec): min=5566, max=52872, avg=21250.38, stdev=10454.03 00:19:53.280 lat (usec): min=5572, max=52881, avg=21432.18, stdev=10522.35 00:19:53.280 clat percentiles (usec): 00:19:53.280 | 1.00th=[ 6063], 5.00th=[10552], 10.00th=[11338], 20.00th=[13042], 00:19:53.280 | 30.00th=[14746], 40.00th=[16909], 50.00th=[19530], 60.00th=[19792], 00:19:53.280 | 70.00th=[21103], 80.00th=[26870], 90.00th=[40109], 95.00th=[44303], 00:19:53.280 | 99.00th=[50594], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:19:53.280 | 99.99th=[52691] 00:19:53.280 write: IOPS=3229, BW=12.6MiB/s (13.2MB/s)(12.8MiB/1015msec); 0 zone resets 00:19:53.280 slat (nsec): min=1888, max=26188k, avg=130588.51, stdev=770721.26 00:19:53.280 clat (usec): min=2892, max=52845, avg=19339.85, stdev=5485.70 00:19:53.280 lat (usec): min=2899, max=52852, avg=19470.44, stdev=5538.77 00:19:53.280 clat percentiles (usec): 00:19:53.280 | 1.00th=[ 3982], 5.00th=[ 7242], 10.00th=[11207], 20.00th=[18482], 00:19:53.280 | 30.00th=[19268], 40.00th=[19792], 50.00th=[20055], 60.00th=[20579], 00:19:53.280 | 70.00th=[20841], 80.00th=[21103], 90.00th=[24249], 95.00th=[28443], 00:19:53.280 | 99.00th=[34866], 99.50th=[35914], 99.90th=[51643], 99.95th=[52691], 00:19:53.280 | 99.99th=[52691] 00:19:53.280 bw ( KiB/s): min=12464, max=12736, per=16.62%, avg=12600.00, stdev=192.33, samples=2 00:19:53.280 iops : min= 3116, max= 3184, avg=3150.00, stdev=48.08, samples=2 00:19:53.280 lat (msec) : 4=0.58%, 10=5.62%, 20=46.68%, 50=46.30%, 100=0.82% 00:19:53.280 cpu : usr=1.38%, sys=3.65%, ctx=407, majf=0, minf=1 00:19:53.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:53.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.280 issued rwts: total=3072,3278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.280 00:19:53.280 Run status group 0 (all jobs): 00:19:53.280 READ: bw=69.9MiB/s (73.3MB/s), 11.8MiB/s-24.9MiB/s (12.4MB/s-26.1MB/s), io=71.0MiB (74.4MB), run=1004-1015msec 00:19:53.280 WRITE: bw=74.0MiB/s (77.6MB/s), 12.6MiB/s-25.9MiB/s (13.2MB/s-27.2MB/s), io=75.1MiB (78.8MB), run=1004-1015msec 00:19:53.280 00:19:53.280 Disk stats (read/write): 00:19:53.280 nvme0n1: ios=5296/5632, merge=0/0, ticks=25589/26129, in_queue=51718, util=85.67% 00:19:53.280 nvme0n2: ios=2585/2687, merge=0/0, ticks=49430/54351, in_queue=103781, util=92.46% 00:19:53.280 nvme0n3: ios=4640/5007, merge=0/0, ticks=52611/50163, in_queue=102774, util=98.00% 00:19:53.280 nvme0n4: ios=2609/2655, merge=0/0, ticks=53200/49974, in_queue=103174, util=97.76% 00:19:53.280 00:56:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:53.280 00:56:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3494519 00:19:53.280 00:56:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:53.280 00:56:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:53.280 [global] 00:19:53.280 thread=1 00:19:53.280 invalidate=1 00:19:53.280 rw=read 
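The [global] options being echoed at this point configure a 10-second, iodepth=1 read job (fio.sh@58) that is left running in the background while the backing bdevs are deleted underneath it. Stripped of the fio details, the sequence the fio.sh trace walks through from @55 onward amounts to the outline below. It is reconstructed from the xtrace lines: paths are abbreviated from their absolute /var/jenkins/... form, the backgrounding and $! are inferred from the captured fio_pid, and the bdev names are the concat0/raid0/Malloc0-Malloc6 of this run.

# Outline of the hot-unplug phase, reconstructed from the fio.sh@55-@70 trace below.
sync
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s of reads against /dev/nvme0n1..n4
fio_pid=$!                                                 # captured as 3494519 in this run
sleep 3
scripts/rpc.py bdev_raid_delete concat0                    # tear down the backing bdevs mid-run
scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
    scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
wait $fio_pid                                              # fio exits non-zero once the namespaces vanish

Because the namespaces disappear under the running job, the err=121 (Remote I/O error) results further down and the closing 'nvmf hotplug test: fio failed as expected' message are the intended outcome rather than a regression.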
00:19:53.280 time_based=1 00:19:53.280 runtime=10 00:19:53.280 ioengine=libaio 00:19:53.280 direct=1 00:19:53.280 bs=4096 00:19:53.280 iodepth=1 00:19:53.280 norandommap=1 00:19:53.280 numjobs=1 00:19:53.280 00:19:53.280 [job0] 00:19:53.280 filename=/dev/nvme0n1 00:19:53.280 [job1] 00:19:53.280 filename=/dev/nvme0n2 00:19:53.280 [job2] 00:19:53.280 filename=/dev/nvme0n3 00:19:53.280 [job3] 00:19:53.280 filename=/dev/nvme0n4 00:19:53.280 Could not set queue depth (nvme0n1) 00:19:53.280 Could not set queue depth (nvme0n2) 00:19:53.280 Could not set queue depth (nvme0n3) 00:19:53.280 Could not set queue depth (nvme0n4) 00:19:53.539 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.539 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.539 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.539 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.539 fio-3.35 00:19:53.539 Starting 4 threads 00:19:56.065 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:56.324 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:56.324 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=39268352, buflen=4096 00:19:56.324 fio: pid=3494696, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:56.324 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.324 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:56.324 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=294912, buflen=4096 00:19:56.324 fio: pid=3494695, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:56.583 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.583 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:56.583 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=5824512, buflen=4096 00:19:56.583 fio: pid=3494693, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:56.583 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=58470400, buflen=4096 00:19:56.583 fio: pid=3494694, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:56.583 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.583 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:56.583 00:19:56.583 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3494693: Wed May 15 00:56:43 2024 00:19:56.583 read: IOPS=491, BW=1964KiB/s (2011kB/s)(5688KiB/2896msec) 00:19:56.583 slat (usec): min=3, max=5547, avg=12.14, stdev=147.07 00:19:56.583 clat (usec): min=139, max=42287, avg=2022.27, stdev=8396.09 00:19:56.583 lat 
(usec): min=143, max=46859, avg=2034.40, stdev=8419.78 00:19:56.583 clat percentiles (usec): 00:19:56.583 | 1.00th=[ 159], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 192], 00:19:56.583 | 30.00th=[ 204], 40.00th=[ 215], 50.00th=[ 229], 60.00th=[ 245], 00:19:56.583 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 371], 00:19:56.583 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:56.583 | 99.99th=[42206] 00:19:56.583 bw ( KiB/s): min= 256, max= 4128, per=4.50%, avg=1516.80, stdev=1661.06, samples=5 00:19:56.583 iops : min= 64, max= 1032, avg=379.20, stdev=415.27, samples=5 00:19:56.583 lat (usec) : 250=66.06%, 500=29.37%, 750=0.14% 00:19:56.583 lat (msec) : 50=4.36% 00:19:56.583 cpu : usr=0.07%, sys=0.55%, ctx=1424, majf=0, minf=1 00:19:56.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.583 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.583 issued rwts: total=1423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.583 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3494694: Wed May 15 00:56:43 2024 00:19:56.583 read: IOPS=4741, BW=18.5MiB/s (19.4MB/s)(55.8MiB/3011msec) 00:19:56.583 slat (usec): min=2, max=15371, avg= 7.51, stdev=143.58 00:19:56.583 clat (usec): min=132, max=42032, avg=202.72, stdev=390.34 00:19:56.583 lat (usec): min=136, max=42036, avg=210.23, stdev=416.15 00:19:56.583 clat percentiles (usec): 00:19:56.583 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:19:56.583 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:19:56.583 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 262], 00:19:56.583 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 408], 99.95th=[ 429], 00:19:56.583 | 99.99th=[20579] 00:19:56.583 bw ( KiB/s): min=16688, max=20648, per=57.66%, avg=19424.00, stdev=1675.20, samples=5 00:19:56.583 iops : min= 4172, max= 5162, avg=4856.00, stdev=418.80, samples=5 00:19:56.583 lat (usec) : 250=93.77%, 500=6.19%, 750=0.02% 00:19:56.583 lat (msec) : 50=0.01% 00:19:56.583 cpu : usr=0.70%, sys=3.22%, ctx=14280, majf=0, minf=1 00:19:56.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.583 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.583 issued rwts: total=14276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.583 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3494695: Wed May 15 00:56:43 2024 00:19:56.583 read: IOPS=26, BW=105KiB/s (107kB/s)(288KiB/2754msec) 00:19:56.583 slat (nsec): min=6598, max=45156, avg=14967.95, stdev=10039.58 00:19:56.583 clat (usec): min=293, max=41288, avg=38222.48, stdev=10160.03 00:19:56.583 lat (usec): min=302, max=41296, avg=38237.24, stdev=10157.35 00:19:56.583 clat percentiles (usec): 00:19:56.583 | 1.00th=[ 293], 5.00th=[ 586], 10.00th=[40633], 20.00th=[41157], 00:19:56.583 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:56.583 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:56.583 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:56.583 | 
99.99th=[41157] 00:19:56.583 bw ( KiB/s): min= 96, max= 136, per=0.31%, avg=104.00, stdev=17.89, samples=5 00:19:56.583 iops : min= 24, max= 34, avg=26.00, stdev= 4.47, samples=5 00:19:56.583 lat (usec) : 500=2.74%, 750=2.74% 00:19:56.583 lat (msec) : 10=1.37%, 50=91.78% 00:19:56.583 cpu : usr=0.00%, sys=0.07%, ctx=75, majf=0, minf=1 00:19:56.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.583 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.583 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.583 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3494696: Wed May 15 00:56:43 2024 00:19:56.583 read: IOPS=3667, BW=14.3MiB/s (15.0MB/s)(37.4MiB/2614msec) 00:19:56.583 slat (nsec): min=3152, max=59567, avg=10210.23, stdev=9002.84 00:19:56.583 clat (usec): min=154, max=41254, avg=260.67, stdev=721.29 00:19:56.583 lat (usec): min=162, max=41260, avg=270.88, stdev=721.63 00:19:56.583 clat percentiles (usec): 00:19:56.583 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:19:56.583 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 241], 00:19:56.583 | 70.00th=[ 251], 80.00th=[ 277], 90.00th=[ 322], 95.00th=[ 343], 00:19:56.583 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 494], 99.95th=[ 578], 00:19:56.583 | 99.99th=[41157] 00:19:56.583 bw ( KiB/s): min=13544, max=17040, per=45.52%, avg=15334.40, stdev=1549.35, samples=5 00:19:56.583 iops : min= 3386, max= 4260, avg=3833.60, stdev=387.34, samples=5 00:19:56.583 lat (usec) : 250=68.80%, 500=31.09%, 750=0.06% 00:19:56.583 lat (msec) : 50=0.03% 00:19:56.583 cpu : usr=1.53%, sys=6.35%, ctx=9590, majf=0, minf=2 00:19:56.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.583 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.583 issued rwts: total=9588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.583 00:19:56.583 Run status group 0 (all jobs): 00:19:56.583 READ: bw=32.9MiB/s (34.5MB/s), 105KiB/s-18.5MiB/s (107kB/s-19.4MB/s), io=99.0MiB (104MB), run=2614-3011msec 00:19:56.583 00:19:56.583 Disk stats (read/write): 00:19:56.583 nvme0n1: ios=1421/0, merge=0/0, ticks=2831/0, in_queue=2831, util=95.43% 00:19:56.583 nvme0n2: ios=13555/0, merge=0/0, ticks=2720/0, in_queue=2720, util=95.24% 00:19:56.583 nvme0n3: ios=106/0, merge=0/0, ticks=3427/0, in_queue=3427, util=100.00% 00:19:56.583 nvme0n4: ios=9586/0, merge=0/0, ticks=2248/0, in_queue=2248, util=96.49% 00:19:56.842 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.842 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:57.101 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:57.101 00:56:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:57.101 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:57.101 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:57.360 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:57.360 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:57.360 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:57.360 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3494519 00:19:57.360 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:57.360 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:57.926 nvmf hotplug test: fio failed as expected 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.926 00:56:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.185 rmmod nvme_tcp 00:19:58.185 rmmod nvme_fabrics 00:19:58.185 rmmod nvme_keyring 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@125 -- # return 0 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3491103 ']' 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3491103 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3491103 ']' 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3491103 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3491103 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3491103' 00:19:58.185 killing process with pid 3491103 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3491103 00:19:58.185 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3491103 00:19:58.185 [2024-05-15 00:56:45.077035] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:58.751 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.751 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.751 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.751 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.751 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.751 00:56:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.751 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.751 00:56:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.654 00:56:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:00.654 00:20:00.654 real 0m27.318s 00:20:00.654 user 2m27.149s 00:20:00.654 sys 0m8.234s 00:20:00.654 00:56:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:00.654 00:56:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.654 ************************************ 00:20:00.654 END TEST nvmf_fio_target 00:20:00.654 ************************************ 00:20:00.654 00:56:47 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:00.654 00:56:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:00.654 00:56:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:00.654 00:56:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.654 ************************************ 00:20:00.654 START TEST nvmf_bdevio 00:20:00.654 ************************************ 00:20:00.654 00:56:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:00.913 * Looking for test storage... 00:20:00.913 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.913 00:56:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.914 00:56:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 
-- # (( 2 == 0 )) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:06.183 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:06.183 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:06.183 Found net devices under 0000:27:00.0: cvl_0_0 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:06.183 Found net devices under 0000:27:00.1: cvl_0_1 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.183 00:56:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:06.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:20:06.183 00:20:06.183 --- 10.0.0.2 ping statistics --- 00:20:06.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.183 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:20:06.183 00:20:06.183 --- 10.0.0.1 ping statistics --- 00:20:06.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.183 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3499618 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3499618 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3499618 ']' 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.183 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:06.184 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:06.184 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:06.184 [2024-05-15 00:56:53.243218] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:20:06.184 [2024-05-15 00:56:53.243329] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.443 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.443 [2024-05-15 00:56:53.366828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:06.443 [2024-05-15 00:56:53.466324] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.443 [2024-05-15 00:56:53.466365] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:06.443 [2024-05-15 00:56:53.466375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.443 [2024-05-15 00:56:53.466385] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.443 [2024-05-15 00:56:53.466393] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.443 [2024-05-15 00:56:53.466592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:06.443 [2024-05-15 00:56:53.466735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.443 [2024-05-15 00:56:53.466720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:06.443 [2024-05-15 00:56:53.466766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:07.007 [2024-05-15 00:56:53.982282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.007 00:56:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:07.007 Malloc0 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
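For readability, the setup that the trace above performs piecemeal can be collected in one place. The sketch below restates, in plain shell, what nvmf_tcp_init and the bdevio.sh rpc_cmd calls do in this run: move one port of the NIC pair into a private network namespace, address both sides on 10.0.0.0/24, open TCP port 4420, start nvmf_tgt inside the namespace, and provision a Malloc-backed subsystem. It is a sketch only: paths are shortened relative to the Jenkins workspace, and rpc_cmd is the test harness's RPC wrapper (assumed here to forward to scripts/rpc.py against /var/tmp/spdk.sock); everything else is copied from the trace.

  # namespace loopback, as performed by nvmf_tcp_init above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator side -> target side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target side -> initiator side

  # target start and provisioning, as performed by nvmfappstart and the bdevio.sh rpc_cmd calls above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420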
00:20:07.007 [2024-05-15 00:56:54.045889] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:07.007 [2024-05-15 00:56:54.046223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:07.007 { 00:20:07.007 "params": { 00:20:07.007 "name": "Nvme$subsystem", 00:20:07.007 "trtype": "$TEST_TRANSPORT", 00:20:07.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.007 "adrfam": "ipv4", 00:20:07.007 "trsvcid": "$NVMF_PORT", 00:20:07.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.007 "hdgst": ${hdgst:-false}, 00:20:07.007 "ddgst": ${ddgst:-false} 00:20:07.007 }, 00:20:07.007 "method": "bdev_nvme_attach_controller" 00:20:07.007 } 00:20:07.007 EOF 00:20:07.007 )") 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:07.007 00:56:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:07.007 "params": { 00:20:07.007 "name": "Nvme1", 00:20:07.007 "trtype": "tcp", 00:20:07.007 "traddr": "10.0.0.2", 00:20:07.007 "adrfam": "ipv4", 00:20:07.007 "trsvcid": "4420", 00:20:07.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.007 "hdgst": false, 00:20:07.007 "ddgst": false 00:20:07.007 }, 00:20:07.007 "method": "bdev_nvme_attach_controller" 00:20:07.007 }' 00:20:07.266 [2024-05-15 00:56:54.131958] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
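The --json /dev/fd/62 argument above comes from bash process substitution: bdevio.sh generates the target description on the fly with gen_nvmf_target_json and hands it to the bdevio binary as an anonymous file. A minimal equivalent is sketched below, with the controller entry re-indented exactly as the printf output above shows it; gen_nvmf_target_json is assumed to wrap this fragment in the standard SPDK "bdev" subsystem config before bdevio reads it, and the path is shortened.

  # equivalent invocation of the trace line above
  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)

  # controller entry generated for this run
  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }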
00:20:07.266 [2024-05-15 00:56:54.132103] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499798 ] 00:20:07.266 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.266 [2024-05-15 00:56:54.263391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.524 [2024-05-15 00:56:54.357485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.524 [2024-05-15 00:56:54.357576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.524 [2024-05-15 00:56:54.357580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.781 I/O targets: 00:20:07.781 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:07.781 00:20:07.781 00:20:07.781 CUnit - A unit testing framework for C - Version 2.1-3 00:20:07.781 http://cunit.sourceforge.net/ 00:20:07.781 00:20:07.781 00:20:07.781 Suite: bdevio tests on: Nvme1n1 00:20:07.781 Test: blockdev write read block ...passed 00:20:07.781 Test: blockdev write zeroes read block ...passed 00:20:07.781 Test: blockdev write zeroes read no split ...passed 00:20:07.781 Test: blockdev write zeroes read split ...passed 00:20:07.781 Test: blockdev write zeroes read split partial ...passed 00:20:07.781 Test: blockdev reset ...[2024-05-15 00:56:54.805917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.781 [2024-05-15 00:56:54.806032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a4380 (9): Bad file descriptor 00:20:08.039 [2024-05-15 00:56:54.957345] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:08.039 passed 00:20:08.039 Test: blockdev write read 8 blocks ...passed 00:20:08.039 Test: blockdev write read size > 128k ...passed 00:20:08.039 Test: blockdev write read invalid size ...passed 00:20:08.039 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:08.039 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:08.039 Test: blockdev write read max offset ...passed 00:20:08.297 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:08.297 Test: blockdev writev readv 8 blocks ...passed 00:20:08.297 Test: blockdev writev readv 30 x 1block ...passed 00:20:08.297 Test: blockdev writev readv block ...passed 00:20:08.297 Test: blockdev writev readv size > 128k ...passed 00:20:08.297 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:08.297 Test: blockdev comparev and writev ...[2024-05-15 00:56:55.298512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.297 [2024-05-15 00:56:55.298552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.297 [2024-05-15 00:56:55.298569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.297 [2024-05-15 00:56:55.298579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:08.297 [2024-05-15 00:56:55.298885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.297 [2024-05-15 00:56:55.298894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:08.297 [2024-05-15 00:56:55.298907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.297 [2024-05-15 00:56:55.298915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:08.297 [2024-05-15 00:56:55.299169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.297 [2024-05-15 00:56:55.299178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:08.297 [2024-05-15 00:56:55.299193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.297 [2024-05-15 00:56:55.299201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:08.297 [2024-05-15 00:56:55.299464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.297 [2024-05-15 00:56:55.299473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:08.297 [2024-05-15 00:56:55.299488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.297 [2024-05-15 00:56:55.299496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:08.297 passed 00:20:08.572 Test: blockdev nvme passthru rw ...passed 00:20:08.572 Test: blockdev nvme passthru vendor specific ...[2024-05-15 00:56:55.383452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.572 [2024-05-15 00:56:55.383481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:08.572 [2024-05-15 00:56:55.383597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.572 [2024-05-15 00:56:55.383606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:08.572 [2024-05-15 00:56:55.383722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.572 [2024-05-15 00:56:55.383731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:08.572 [2024-05-15 00:56:55.383850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.572 [2024-05-15 00:56:55.383858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:08.572 passed 00:20:08.572 Test: blockdev nvme admin passthru ...passed 00:20:08.572 Test: blockdev copy ...passed 00:20:08.572 00:20:08.572 Run Summary: Type Total Ran Passed Failed Inactive 00:20:08.572 suites 1 1 n/a 0 0 00:20:08.572 tests 23 23 23 0 0 00:20:08.572 asserts 152 152 152 0 n/a 00:20:08.572 00:20:08.572 Elapsed time = 1.667 seconds 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.899 rmmod nvme_tcp 00:20:08.899 rmmod nvme_fabrics 00:20:08.899 rmmod nvme_keyring 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3499618 ']' 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3499618 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3499618 ']' 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3499618 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3499618 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3499618' 00:20:08.899 killing process with pid 3499618 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3499618 00:20:08.899 [2024-05-15 00:56:55.956151] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:08.899 00:56:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3499618 00:20:09.467 00:56:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.467 00:56:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.467 00:56:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.467 00:56:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.467 00:56:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.467 00:56:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.467 00:56:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.467 00:56:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.004 00:56:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.004 00:20:12.004 real 0m10.875s 00:20:12.004 user 0m16.243s 00:20:12.004 sys 0m4.674s 00:20:12.004 00:56:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:12.004 00:56:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:12.004 ************************************ 00:20:12.004 END TEST nvmf_bdevio 00:20:12.004 ************************************ 00:20:12.004 00:56:58 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:20:12.005 00:56:58 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:12.005 00:56:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:20:12.005 00:56:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:12.005 00:56:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:12.005 ************************************ 00:20:12.005 START TEST nvmf_bdevio_no_huge 00:20:12.005 ************************************ 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:12.005 * Looking for test storage... 
00:20:12.005 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.005 00:56:58 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.005 00:56:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 
-- # pci_devs+=("${e810[@]}") 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:17.273 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:17.273 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:17.273 Found net devices under 0000:27:00.0: cvl_0_0 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.273 
00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:17.273 Found net devices under 0000:27:00.1: cvl_0_1 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.273 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.532 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.532 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.532 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:20:17.532 00:20:17.532 --- 10.0.0.2 ping statistics --- 00:20:17.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.532 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:17.532 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:20:17.532 00:20:17.532 --- 10.0.0.1 ping statistics --- 00:20:17.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.532 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:17.532 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.532 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:17.532 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.532 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.532 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.532 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3504306 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3504306 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3504306 ']' 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.533 00:57:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:17.533 [2024-05-15 00:57:04.477380] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:20:17.533 [2024-05-15 00:57:04.477518] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:17.792 [2024-05-15 00:57:04.639269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.792 [2024-05-15 00:57:04.764478] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.792 [2024-05-15 00:57:04.764528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.792 [2024-05-15 00:57:04.764539] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.792 [2024-05-15 00:57:04.764550] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.792 [2024-05-15 00:57:04.764559] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.792 [2024-05-15 00:57:04.764694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:17.792 [2024-05-15 00:57:04.764753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:17.792 [2024-05-15 00:57:04.764864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.792 [2024-05-15 00:57:04.764894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.359 [2024-05-15 00:57:05.243290] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.359 Malloc0 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.359 [2024-05-15 00:57:05.301033] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:18.359 [2024-05-15 00:57:05.301425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.359 { 00:20:18.359 "params": { 00:20:18.359 "name": "Nvme$subsystem", 00:20:18.359 "trtype": "$TEST_TRANSPORT", 00:20:18.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.359 "adrfam": "ipv4", 00:20:18.359 "trsvcid": "$NVMF_PORT", 00:20:18.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.359 "hdgst": ${hdgst:-false}, 00:20:18.359 "ddgst": ${ddgst:-false} 00:20:18.359 }, 00:20:18.359 "method": "bdev_nvme_attach_controller" 00:20:18.359 } 00:20:18.359 EOF 00:20:18.359 )") 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:18.359 00:57:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:18.359 "params": { 00:20:18.359 "name": "Nvme1", 00:20:18.359 "trtype": "tcp", 00:20:18.359 "traddr": "10.0.0.2", 00:20:18.359 "adrfam": "ipv4", 00:20:18.359 "trsvcid": "4420", 00:20:18.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.359 "hdgst": false, 00:20:18.359 "ddgst": false 00:20:18.359 }, 00:20:18.359 "method": "bdev_nvme_attach_controller" 00:20:18.359 }' 00:20:18.359 [2024-05-15 00:57:05.386907] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
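The nvmf_bdevio_no_huge run repeats the same namespace setup, RPC provisioning, and bdevio suite as nvmf_bdevio above; the only difference visible in the trace is that both the target and the bdevio application are started without hugepages and with a fixed memory budget, so the TCP data path is exercised from ordinary page-sized memory. Side by side, with paths shortened:

  # nvmf_bdevio (hugepage-backed, default):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
  ./test/bdev/bdevio/bdevio --json /dev/fd/62

  # nvmf_bdevio_no_huge: same flow, but --no-huge plus -s 1024 on both sides,
  # matching the "-m 1024 --no-huge" EAL parameter lines in the trace above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  ./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024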
00:20:18.359 [2024-05-15 00:57:05.387053] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3504620 ] 00:20:18.617 [2024-05-15 00:57:05.543837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:18.617 [2024-05-15 00:57:05.669111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.617 [2024-05-15 00:57:05.669216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.617 [2024-05-15 00:57:05.669221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.875 I/O targets: 00:20:18.875 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:18.875 00:20:18.875 00:20:18.875 CUnit - A unit testing framework for C - Version 2.1-3 00:20:18.875 http://cunit.sourceforge.net/ 00:20:18.875 00:20:18.875 00:20:18.875 Suite: bdevio tests on: Nvme1n1 00:20:19.132 Test: blockdev write read block ...passed 00:20:19.132 Test: blockdev write zeroes read block ...passed 00:20:19.132 Test: blockdev write zeroes read no split ...passed 00:20:19.132 Test: blockdev write zeroes read split ...passed 00:20:19.132 Test: blockdev write zeroes read split partial ...passed 00:20:19.132 Test: blockdev reset ...[2024-05-15 00:57:06.067291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.132 [2024-05-15 00:57:06.067402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039f600 (9): Bad file descriptor 00:20:19.389 [2024-05-15 00:57:06.212576] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:19.389 passed 00:20:19.389 Test: blockdev write read 8 blocks ...passed 00:20:19.389 Test: blockdev write read size > 128k ...passed 00:20:19.389 Test: blockdev write read invalid size ...passed 00:20:19.389 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.389 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.389 Test: blockdev write read max offset ...passed 00:20:19.389 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.389 Test: blockdev writev readv 8 blocks ...passed 00:20:19.389 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.389 Test: blockdev writev readv block ...passed 00:20:19.389 Test: blockdev writev readv size > 128k ...passed 00:20:19.389 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.389 Test: blockdev comparev and writev ...[2024-05-15 00:57:06.430899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.389 [2024-05-15 00:57:06.430942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.389 [2024-05-15 00:57:06.430962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.389 [2024-05-15 00:57:06.430972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.389 [2024-05-15 00:57:06.431313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.389 [2024-05-15 00:57:06.431323] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:19.389 [2024-05-15 00:57:06.431337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.389 [2024-05-15 00:57:06.431345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:19.389 [2024-05-15 00:57:06.431686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.389 [2024-05-15 00:57:06.431696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:19.389 [2024-05-15 00:57:06.431710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.389 [2024-05-15 00:57:06.431719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:19.389 [2024-05-15 00:57:06.432047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.389 [2024-05-15 00:57:06.432057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:19.389 [2024-05-15 00:57:06.432070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.389 [2024-05-15 00:57:06.432079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:19.648 passed 00:20:19.648 Test: blockdev nvme passthru rw ...passed 00:20:19.648 Test: blockdev nvme passthru vendor specific ...[2024-05-15 00:57:06.515466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.648 [2024-05-15 00:57:06.515491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:19.648 [2024-05-15 00:57:06.515626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.648 [2024-05-15 00:57:06.515640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:19.648 [2024-05-15 00:57:06.515764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.648 [2024-05-15 00:57:06.515773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:19.648 [2024-05-15 00:57:06.515909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.648 [2024-05-15 00:57:06.515918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:19.648 passed 00:20:19.648 Test: blockdev nvme admin passthru ...passed 00:20:19.648 Test: blockdev copy ...passed 00:20:19.648 00:20:19.648 Run Summary: Type Total Ran Passed Failed Inactive 00:20:19.648 suites 1 1 n/a 0 0 00:20:19.648 tests 23 23 23 0 0 00:20:19.648 asserts 
152 152 152 0 n/a 00:20:19.648 00:20:19.648 Elapsed time = 1.308 seconds 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:19.906 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:19.906 rmmod nvme_tcp 00:20:19.906 rmmod nvme_fabrics 00:20:19.906 rmmod nvme_keyring 00:20:20.164 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.165 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:20.165 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:20.165 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3504306 ']' 00:20:20.165 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3504306 00:20:20.165 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3504306 ']' 00:20:20.165 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3504306 00:20:20.165 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:20:20.165 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.165 00:57:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3504306 00:20:20.165 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:20.165 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:20.165 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3504306' 00:20:20.165 killing process with pid 3504306 00:20:20.165 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3504306 00:20:20.165 [2024-05-15 00:57:07.035968] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:20.165 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3504306 00:20:20.422 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.422 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.422 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:20.422 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.422 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.422 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.422 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.422 00:57:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.957 00:57:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.957 00:20:22.957 real 0m10.885s 00:20:22.957 user 0m14.611s 00:20:22.957 sys 0m5.210s 00:20:22.957 00:57:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:22.957 00:57:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.957 ************************************ 00:20:22.957 END TEST nvmf_bdevio_no_huge 00:20:22.957 ************************************ 00:20:22.957 00:57:09 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:22.957 00:57:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:22.957 00:57:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:22.957 00:57:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:22.957 ************************************ 00:20:22.957 START TEST nvmf_tls 00:20:22.957 ************************************ 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:22.957 * Looking for test storage... 
00:20:22.957 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:22.957 00:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:29.528 
00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:29.528 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:29.528 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:29.528 Found net devices under 0000:27:00.0: cvl_0_0 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:29.528 Found net devices under 0000:27:00.1: cvl_0_1 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.528 00:57:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:29.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:20:29.528 00:20:29.528 --- 10.0.0.2 ping statistics --- 00:20:29.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.528 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:29.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:20:29.528 00:20:29.528 --- 10.0.0.1 ping statistics --- 00:20:29.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.528 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:29.528 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3509316 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3509316 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3509316 ']' 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:29.529 [2024-05-15 00:57:16.263337] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:20:29.529 [2024-05-15 00:57:16.263469] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.529 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.529 [2024-05-15 00:57:16.426879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.529 [2024-05-15 00:57:16.582115] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.529 [2024-05-15 00:57:16.582178] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:29.529 [2024-05-15 00:57:16.582196] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.529 [2024-05-15 00:57:16.582213] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.529 [2024-05-15 00:57:16.582226] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.529 [2024-05-15 00:57:16.582275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.094 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:30.094 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:30.094 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:30.094 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.094 00:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.094 00:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.094 00:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:30.094 00:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:30.094 true 00:20:30.094 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.094 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:30.353 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:30.353 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:30.353 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:30.611 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.611 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:30.611 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:30.611 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:30.611 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:30.869 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.869 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:30.869 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:30.869 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:30.869 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.869 00:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:31.128 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:31.128 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:31.128 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:31.128 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.128 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:31.387 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:31.387 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:31.387 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:31.387 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.387 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.8HUh6wOULl 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6PwqVFwMos 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.8HUh6wOULl 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6PwqVFwMos 00:20:31.646 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 
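The block above switches the default socket implementation to ssl, pins TLS 1.3, and writes two interchange-format PSKs (a working key and a deliberately non-matching one) into 0600-mode temp files. Condensed into standalone commands, and assuming rpc.py refers to SPDK's scripts/rpc.py talking to this target, the same wiring looks roughly like this; the key literal is the test vector from this log, not a secret:

# Socket/TLS options are applied before framework_start_init because the target
# was started with --wait-for-rpc (framework_start_init is the next step in the log).
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
# Interchange-format PSK as produced by format_interchange_psk above; PSK files are kept 0600.
key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"
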
00:20:31.905 00:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:32.163 00:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.8HUh6wOULl 00:20:32.163 00:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8HUh6wOULl 00:20:32.163 00:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.426 [2024-05-15 00:57:19.251105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.426 00:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:32.426 00:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:32.687 [2024-05-15 00:57:19.555080] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:32.687 [2024-05-15 00:57:19.555173] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:32.687 [2024-05-15 00:57:19.555402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.687 00:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:32.687 malloc0 00:20:32.945 00:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.945 00:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8HUh6wOULl 00:20:33.203 [2024-05-15 00:57:20.047627] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:33.203 00:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.8HUh6wOULl 00:20:33.203 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.247 Initializing NVMe Controllers 00:20:43.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:43.247 Initialization complete. Launching workers. 
00:20:43.247 ======================================================== 00:20:43.247 Latency(us) 00:20:43.247 Device Information : IOPS MiB/s Average min max 00:20:43.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17222.65 67.28 3716.38 1073.62 5417.58 00:20:43.247 ======================================================== 00:20:43.247 Total : 17222.65 67.28 3716.38 1073.62 5417.58 00:20:43.247 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8HUh6wOULl 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8HUh6wOULl' 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3512032 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3512032 /var/tmp/bdevperf.sock 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3512032 ']' 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:43.247 00:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.505 [2024-05-15 00:57:30.321854] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
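setup_nvmf_tgt above builds the TLS test target: a TCP transport, a malloc-backed namespace under nqn.2016-06.io.spdk:cnode1, a listener created with -k (TLS-secured channel), and host1 bound to the PSK file; spdk_nvme_perf then exercises it with -S ssl, and run_bdevperf repeats the exercise through bdevperf. A compact restatement of those commands, assuming $key_path carries over from the key-generation sketch earlier and that the relative paths are run from the SPDK tree:

# Target side (mirrors setup_nvmf_tgt; addresses, NQNs and sizes are the ones used in this run).
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
# Initiator side: attach through bdevperf's RPC socket with the matching PSK, then drive I/O.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
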
00:20:43.505 [2024-05-15 00:57:30.321972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3512032 ] 00:20:43.505 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.505 [2024-05-15 00:57:30.432914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.505 [2024-05-15 00:57:30.528452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.073 00:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:44.073 00:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:44.073 00:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8HUh6wOULl 00:20:44.332 [2024-05-15 00:57:31.183802] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.332 [2024-05-15 00:57:31.183940] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.332 TLSTESTn1 00:20:44.332 00:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:44.332 Running I/O for 10 seconds... 00:20:54.332 00:20:54.332 Latency(us) 00:20:54.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.332 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:54.332 Verification LBA range: start 0x0 length 0x2000 00:20:54.332 TLSTESTn1 : 10.02 5597.57 21.87 0.00 0.00 22832.40 4828.97 38079.87 00:20:54.332 =================================================================================================================== 00:20:54.332 Total : 5597.57 21.87 0.00 0.00 22832.40 4828.97 38079.87 00:20:54.332 0 00:20:54.332 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.332 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3512032 00:20:54.332 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3512032 ']' 00:20:54.332 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3512032 00:20:54.332 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:54.332 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:54.332 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3512032 00:20:54.590 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:54.590 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:54.590 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3512032' 00:20:54.590 killing process with pid 3512032 00:20:54.591 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3512032 00:20:54.591 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.591 00:20:54.591 Latency(us) 00:20:54.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.591 
=================================================================================================================== 00:20:54.591 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.591 [2024-05-15 00:57:41.427747] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:54.591 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3512032 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6PwqVFwMos 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6PwqVFwMos 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6PwqVFwMos 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6PwqVFwMos' 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3514364 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3514364 /var/tmp/bdevperf.sock 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3514364 ']' 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:54.848 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.849 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:54.849 00:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.849 00:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.849 [2024-05-15 00:57:41.889334] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
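The case starting here reuses the same attach path but hands the initiator the second key, /tmp/tmp.6PwqVFwMos, which was never registered for host1, so bdev_nvme_attach_controller is expected to fail with the JSON-RPC error shown a few lines below; the NOT wrapper turns that failure into a pass. The expect-failure pattern reduces to something like this sketch (key path taken from this run):

# Attaching with a PSK that does not match the one registered for host1 must be rejected.
if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6PwqVFwMos; then
    echo "ERROR: attach with a mismatched PSK unexpectedly succeeded" >&2
    exit 1
fi
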
00:20:54.849 [2024-05-15 00:57:41.889447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514364 ] 00:20:55.106 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.106 [2024-05-15 00:57:42.001794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.106 [2024-05-15 00:57:42.097026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.671 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:55.671 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:55.671 00:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6PwqVFwMos 00:20:55.930 [2024-05-15 00:57:42.733987] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.930 [2024-05-15 00:57:42.734147] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:55.930 [2024-05-15 00:57:42.742102] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:55.930 [2024-05-15 00:57:42.742158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (107): Transport endpoint is not connected 00:20:55.930 [2024-05-15 00:57:42.743136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:20:55.930 [2024-05-15 00:57:42.744131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:55.930 [2024-05-15 00:57:42.744148] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:55.930 [2024-05-15 00:57:42.744164] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:55.930 request: 00:20:55.930 { 00:20:55.930 "name": "TLSTEST", 00:20:55.930 "trtype": "tcp", 00:20:55.930 "traddr": "10.0.0.2", 00:20:55.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.930 "adrfam": "ipv4", 00:20:55.930 "trsvcid": "4420", 00:20:55.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.930 "psk": "/tmp/tmp.6PwqVFwMos", 00:20:55.930 "method": "bdev_nvme_attach_controller", 00:20:55.930 "req_id": 1 00:20:55.930 } 00:20:55.930 Got JSON-RPC error response 00:20:55.930 response: 00:20:55.930 { 00:20:55.930 "code": -32602, 00:20:55.930 "message": "Invalid parameters" 00:20:55.930 } 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3514364 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3514364 ']' 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3514364 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3514364 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3514364' 00:20:55.930 killing process with pid 3514364 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3514364 00:20:55.930 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.930 00:20:55.930 Latency(us) 00:20:55.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.930 =================================================================================================================== 00:20:55.930 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:55.930 [2024-05-15 00:57:42.810893] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:55.930 00:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3514364 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8HUh6wOULl 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8HUh6wOULl 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8HUh6wOULl 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8HUh6wOULl' 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3514561 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3514561 /var/tmp/bdevperf.sock 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3514561 ']' 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.187 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.445 [2024-05-15 00:57:43.290203] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
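This next negative case keeps the valid key but presents it as nqn.2016-06.io.spdk:host2, for which no PSK was ever registered; the target therefore cannot resolve the TLS identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and rejects the connection, as the error a few lines below shows. If host2 were actually meant to connect, the hypothetical fix would be to register a key for that hostnqn as well, e.g.:

# Hypothetical: allow host2 by binding it to a PSK too; without this, the rejection seen
# below is the expected outcome of the negative test.
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk "$key_path"
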
00:20:56.445 [2024-05-15 00:57:43.290343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514561 ] 00:20:56.445 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.445 [2024-05-15 00:57:43.414713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.702 [2024-05-15 00:57:43.509894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.960 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:56.960 00:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:56.960 00:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.8HUh6wOULl 00:20:57.217 [2024-05-15 00:57:44.114429] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.217 [2024-05-15 00:57:44.114567] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:57.217 [2024-05-15 00:57:44.121912] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:57.217 [2024-05-15 00:57:44.121947] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:57.217 [2024-05-15 00:57:44.121990] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:57.217 [2024-05-15 00:57:44.122301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (107): Transport endpoint is not connected 00:20:57.217 [2024-05-15 00:57:44.123283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:20:57.217 [2024-05-15 00:57:44.124277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.217 [2024-05-15 00:57:44.124294] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:57.217 [2024-05-15 00:57:44.124307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:57.217 request: 00:20:57.217 { 00:20:57.217 "name": "TLSTEST", 00:20:57.217 "trtype": "tcp", 00:20:57.217 "traddr": "10.0.0.2", 00:20:57.217 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:57.217 "adrfam": "ipv4", 00:20:57.217 "trsvcid": "4420", 00:20:57.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.217 "psk": "/tmp/tmp.8HUh6wOULl", 00:20:57.217 "method": "bdev_nvme_attach_controller", 00:20:57.217 "req_id": 1 00:20:57.217 } 00:20:57.217 Got JSON-RPC error response 00:20:57.217 response: 00:20:57.217 { 00:20:57.217 "code": -32602, 00:20:57.217 "message": "Invalid parameters" 00:20:57.217 } 00:20:57.217 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3514561 00:20:57.217 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3514561 ']' 00:20:57.217 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3514561 00:20:57.217 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:57.217 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:57.217 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3514561 00:20:57.217 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:57.218 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:57.218 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3514561' 00:20:57.218 killing process with pid 3514561 00:20:57.218 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3514561 00:20:57.218 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.218 00:20:57.218 Latency(us) 00:20:57.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.218 =================================================================================================================== 00:20:57.218 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:57.218 [2024-05-15 00:57:44.188101] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.218 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3514561 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8HUh6wOULl 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8HUh6wOULl 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8HUh6wOULl 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8HUh6wOULl' 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3514778 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3514778 /var/tmp/bdevperf.sock 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3514778 ']' 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:57.783 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.784 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:57.784 00:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.784 00:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.784 [2024-05-15 00:57:44.661777] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:20:57.784 [2024-05-15 00:57:44.661927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514778 ] 00:20:57.784 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.784 [2024-05-15 00:57:44.793221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.042 [2024-05-15 00:57:44.891001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.300 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:58.300 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:58.300 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8HUh6wOULl 00:20:58.559 [2024-05-15 00:57:45.458486] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.559 [2024-05-15 00:57:45.458607] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.559 [2024-05-15 00:57:45.471380] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:58.559 [2024-05-15 00:57:45.471408] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:58.560 [2024-05-15 00:57:45.471441] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:58.560 [2024-05-15 00:57:45.472009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (107): Transport endpoint is not connected 00:20:58.560 [2024-05-15 00:57:45.472990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:20:58.560 [2024-05-15 00:57:45.473983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:58.560 [2024-05-15 00:57:45.473998] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:58.560 [2024-05-15 00:57:45.474010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:58.560 request: 00:20:58.560 { 00:20:58.560 "name": "TLSTEST", 00:20:58.560 "trtype": "tcp", 00:20:58.560 "traddr": "10.0.0.2", 00:20:58.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.560 "adrfam": "ipv4", 00:20:58.560 "trsvcid": "4420", 00:20:58.560 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.560 "psk": "/tmp/tmp.8HUh6wOULl", 00:20:58.560 "method": "bdev_nvme_attach_controller", 00:20:58.560 "req_id": 1 00:20:58.560 } 00:20:58.560 Got JSON-RPC error response 00:20:58.560 response: 00:20:58.560 { 00:20:58.560 "code": -32602, 00:20:58.560 "message": "Invalid parameters" 00:20:58.560 } 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3514778 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3514778 ']' 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3514778 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3514778 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3514778' 00:20:58.560 killing process with pid 3514778 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3514778 00:20:58.560 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.560 00:20:58.560 Latency(us) 00:20:58.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.560 =================================================================================================================== 00:20:58.560 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.560 [2024-05-15 00:57:45.529853] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:58.560 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3514778 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3515019 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3515019 /var/tmp/bdevperf.sock 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3515019 ']' 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.129 00:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:59.129 [2024-05-15 00:57:45.973932] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:20:59.129 [2024-05-15 00:57:45.974088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515019 ] 00:20:59.129 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.129 [2024-05-15 00:57:46.108186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.388 [2024-05-15 00:57:46.205278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.648 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:59.648 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:59.648 00:57:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:59.907 [2024-05-15 00:57:46.832463] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:59.907 [2024-05-15 00:57:46.834299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:20:59.907 [2024-05-15 00:57:46.835293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:59.907 [2024-05-15 00:57:46.835309] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:59.907 [2024-05-15 00:57:46.835324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
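The third NOT case (target/tls.sh@155) drops the key entirely: psk= is left empty, so bdevperf attaches with no --psk at all against a listener that the suite's setup_nvmf_tgt path adds with -k (the save_config at the end of this section shows the resulting "secure_channel": true for the rebuilt target). Without TLS credentials the connection is dropped during setup, the read fails with errno 107, and the attach is rejected with the same -32602 as the mismatched-identity cases. As with those cases, the NOT/valid_exec_arg wrapper from common/autotest_common.sh inverts the exit status, so this attempt must fail for the suite to pass. Stripped of the wrapper, the command under test is:

  # same attach as the passing runs, minus --psk (rpc.py path abbreviated)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Its rejected request and error response follow.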
00:20:59.907 request: 00:20:59.907 { 00:20:59.907 "name": "TLSTEST", 00:20:59.907 "trtype": "tcp", 00:20:59.907 "traddr": "10.0.0.2", 00:20:59.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.908 "adrfam": "ipv4", 00:20:59.908 "trsvcid": "4420", 00:20:59.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.908 "method": "bdev_nvme_attach_controller", 00:20:59.908 "req_id": 1 00:20:59.908 } 00:20:59.908 Got JSON-RPC error response 00:20:59.908 response: 00:20:59.908 { 00:20:59.908 "code": -32602, 00:20:59.908 "message": "Invalid parameters" 00:20:59.908 } 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3515019 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3515019 ']' 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3515019 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3515019 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3515019' 00:20:59.908 killing process with pid 3515019 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3515019 00:20:59.908 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.908 00:20:59.908 Latency(us) 00:20:59.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.908 =================================================================================================================== 00:20:59.908 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.908 00:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3515019 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3509316 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3509316 ']' 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3509316 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3509316 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3509316' 00:21:00.475 killing process with pid 3509316 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3509316 
00:21:00.475 [2024-05-15 00:57:47.303297] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:00.475 [2024-05-15 00:57:47.303342] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:00.475 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3509316 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.18FsmkDu3W 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.18FsmkDu3W 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3515480 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3515480 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3515480 ']' 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:01.045 00:57:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.045 [2024-05-15 00:57:47.990573] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:21:01.045 [2024-05-15 00:57:47.990723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.045 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.304 [2024-05-15 00:57:48.120791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.304 [2024-05-15 00:57:48.219640] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.304 [2024-05-15 00:57:48.219681] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.304 [2024-05-15 00:57:48.219691] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.304 [2024-05-15 00:57:48.219707] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.304 [2024-05-15 00:57:48.219715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.304 [2024-05-15 00:57:48.219743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.18FsmkDu3W 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.18FsmkDu3W 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.871 [2024-05-15 00:57:48.810575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.871 00:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:02.128 00:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:02.129 [2024-05-15 00:57:49.070590] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:02.129 [2024-05-15 00:57:49.070687] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.129 [2024-05-15 00:57:49.070902] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.129 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:02.388 malloc0 00:21:02.388 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.388 00:57:49 
nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W 00:21:02.648 [2024-05-15 00:57:49.515246] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.18FsmkDu3W 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.18FsmkDu3W' 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3515821 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3515821 /var/tmp/bdevperf.sock 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3515821 ']' 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.648 00:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.648 [2024-05-15 00:57:49.585408] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:21:02.648 [2024-05-15 00:57:49.585501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515821 ] 00:21:02.648 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.648 [2024-05-15 00:57:49.676247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.908 [2024-05-15 00:57:49.773453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.477 00:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:03.478 00:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:03.478 00:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W 00:21:03.478 [2024-05-15 00:57:50.436123] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.478 [2024-05-15 00:57:50.436244] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:03.478 TLSTESTn1 00:21:03.478 00:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:03.735 Running I/O for 10 seconds... 00:21:13.714 00:21:13.714 Latency(us) 00:21:13.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.714 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:13.714 Verification LBA range: start 0x0 length 0x2000 00:21:13.714 TLSTESTn1 : 10.02 5703.70 22.28 0.00 0.00 22404.65 4691.00 41391.16 00:21:13.714 =================================================================================================================== 00:21:13.714 Total : 5703.70 22.28 0.00 0.00 22404.65 4691.00 41391.16 00:21:13.714 0 00:21:13.714 00:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.714 00:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3515821 00:21:13.714 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3515821 ']' 00:21:13.714 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3515821 00:21:13.714 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:13.714 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:13.714 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3515821 00:21:13.714 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:13.714 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:13.715 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3515821' 00:21:13.715 killing process with pid 3515821 00:21:13.715 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3515821 00:21:13.715 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.715 00:21:13.715 Latency(us) 00:21:13.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.715 
=================================================================================================================== 00:21:13.715 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.715 [2024-05-15 00:58:00.651665] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:13.715 00:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3515821 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.18FsmkDu3W 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.18FsmkDu3W 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.18FsmkDu3W 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.18FsmkDu3W 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.18FsmkDu3W' 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3518018 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3518018 /var/tmp/bdevperf.sock 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3518018 ']' 00:21:14.281 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.282 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:14.282 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.282 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:14.282 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.282 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:14.282 [2024-05-15 00:58:01.117423] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:21:14.282 [2024-05-15 00:58:01.117537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518018 ] 00:21:14.282 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.282 [2024-05-15 00:58:01.230917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.282 [2024-05-15 00:58:01.324617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.848 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:14.848 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:14.848 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W 00:21:15.108 [2024-05-15 00:58:01.947623] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.108 [2024-05-15 00:58:01.947705] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:15.108 [2024-05-15 00:58:01.947718] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.18FsmkDu3W 00:21:15.108 request: 00:21:15.108 { 00:21:15.108 "name": "TLSTEST", 00:21:15.108 "trtype": "tcp", 00:21:15.108 "traddr": "10.0.0.2", 00:21:15.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.108 "adrfam": "ipv4", 00:21:15.108 "trsvcid": "4420", 00:21:15.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.108 "psk": "/tmp/tmp.18FsmkDu3W", 00:21:15.108 "method": "bdev_nvme_attach_controller", 00:21:15.108 "req_id": 1 00:21:15.108 } 00:21:15.108 Got JSON-RPC error response 00:21:15.108 response: 00:21:15.108 { 00:21:15.108 "code": -1, 00:21:15.108 "message": "Operation not permitted" 00:21:15.108 } 00:21:15.108 00:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3518018 00:21:15.108 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3518018 ']' 00:21:15.108 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3518018 00:21:15.108 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:15.108 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:15.108 00:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3518018 00:21:15.108 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:15.108 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:15.108 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3518018' 00:21:15.108 killing process with pid 3518018 00:21:15.108 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3518018 00:21:15.108 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.108 00:21:15.108 Latency(us) 00:21:15.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.108 =================================================================================================================== 00:21:15.108 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:15.108 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # 
wait 3518018 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3515480 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3515480 ']' 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3515480 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3515480 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3515480' 00:21:15.372 killing process with pid 3515480 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3515480 00:21:15.372 [2024-05-15 00:58:02.422506] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:15.372 [2024-05-15 00:58:02.422572] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:15.372 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3515480 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3518336 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3518336 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3518336 ']' 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
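A note on the long-format key created back at target/tls.sh@159-162 and used by every attach since: format_interchange_psk / format_key (nvmf/common.sh) emit the PSK in the NVMe TLS interchange form NVMeTLSkey-1:<digest>:<base64 blob>:, where digest 2 pairs with the 48-byte key used here and the base64 blob is the configured key with a 4-byte CRC appended. A rough stand-in for that helper, as a sketch only (the CRC32 choice and its byte order are assumptions here, not read from this log; the real helper lives in nvmf/common.sh):

  key=00112233445566778899aabbccddeeff0011223344556677
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:"+base64.b64encode(k+crc).decode()+":", end="")' "$key"
  # log above shows: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

That string is written to /tmp/tmp.18FsmkDu3W and chmod 0600 before being registered with the target; the chmod 0666 at target/tls.sh@170 then deliberately breaks that precondition for the two permission checks in this phase.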
00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.995 00:58:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:15.995 [2024-05-15 00:58:03.000624] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:15.995 [2024-05-15 00:58:03.000725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.253 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.253 [2024-05-15 00:58:03.123529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.253 [2024-05-15 00:58:03.221675] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.253 [2024-05-15 00:58:03.221721] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.253 [2024-05-15 00:58:03.221731] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.253 [2024-05-15 00:58:03.221741] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.253 [2024-05-15 00:58:03.221749] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.253 [2024-05-15 00:58:03.221780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.823 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.18FsmkDu3W 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.18FsmkDu3W 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.18FsmkDu3W 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.18FsmkDu3W 00:21:16.824 00:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:16.824 [2024-05-15 00:58:03.867360] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.084 00:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:17.084 00:58:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:17.350 [2024-05-15 00:58:04.163374] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:17.350 [2024-05-15 00:58:04.163469] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.350 [2024-05-15 00:58:04.163707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.351 00:58:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:17.351 malloc0 00:21:17.351 00:58:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W 00:21:17.612 [2024-05-15 00:58:04.624879] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:17.612 [2024-05-15 00:58:04.624909] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:17.612 [2024-05-15 00:58:04.624933] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:17.612 request: 00:21:17.612 { 00:21:17.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.612 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.612 "psk": "/tmp/tmp.18FsmkDu3W", 00:21:17.612 "method": "nvmf_subsystem_add_host", 00:21:17.612 "req_id": 1 00:21:17.612 } 00:21:17.612 Got JSON-RPC error response 00:21:17.612 response: 00:21:17.612 { 00:21:17.612 "code": -32603, 00:21:17.612 "message": "Internal error" 00:21:17.612 } 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3518336 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3518336 ']' 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3518336 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:17.612 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3518336 00:21:17.869 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:17.869 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:17.869 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3518336' 00:21:17.869 killing process with pid 3518336 00:21:17.869 00:58:04 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3518336 00:21:17.869 [2024-05-15 00:58:04.684576] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:17.869 00:58:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3518336 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.18FsmkDu3W 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3518944 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3518944 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3518944 ']' 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:18.127 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.386 [2024-05-15 00:58:05.252641] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:18.387 [2024-05-15 00:58:05.252746] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.387 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.387 [2024-05-15 00:58:05.372807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.646 [2024-05-15 00:58:05.464549] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.646 [2024-05-15 00:58:05.464593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.646 [2024-05-15 00:58:05.464602] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.646 [2024-05-15 00:58:05.464611] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.646 [2024-05-15 00:58:05.464619] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
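Both permission checks in this phase behaved as their NOT wrappers expect: with /tmp/tmp.18FsmkDu3W relaxed to 0666, the bdevperf-side attach was refused with "Incorrect permissions for PSK file" / "Operation not permitted" (-1), and the target-side nvmf_subsystem_add_host was refused with "Could not retrieve PSK from file" / "Internal error" (-32603); target/tls.sh@181 then restores 0600 before the key is reused. Condensed from the trace above (rpc.py paths abbreviated), the failing pair looks like this:

  chmod 0666 /tmp/tmp.18FsmkDu3W                                  # target/tls.sh@170
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W      # -> Operation not permitted
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W         # -> Internal error
  chmod 0600 /tmp/tmp.18FsmkDu3W                                  # target/tls.sh@181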
00:21:18.646 [2024-05-15 00:58:05.464654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.906 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:18.906 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:18.906 00:58:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.906 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.906 00:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.166 00:58:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.166 00:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.18FsmkDu3W 00:21:19.166 00:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.18FsmkDu3W 00:21:19.166 00:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:19.166 [2024-05-15 00:58:06.118597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.166 00:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:19.427 00:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:19.427 [2024-05-15 00:58:06.418624] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:19.427 [2024-05-15 00:58:06.418738] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.427 [2024-05-15 00:58:06.418988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.427 00:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:19.687 malloc0 00:21:19.687 00:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W 00:21:19.948 [2024-05-15 00:58:06.889666] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3519272 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3519272 /var/tmp/bdevperf.sock 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3519272 ']' 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.948 
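With the key back at 0600, this phase rebuilds the target (pid 3518944) through setup_nvmf_tgt, and the bdevperf attach that follows succeeds, producing the TLSTESTn1 bdev and the save_config dump reproduced below. Condensed from the setup_nvmf_tgt trace above (target/tls.sh@49-58, rpc.py paths abbreviated), the target-side sequence is:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W

If one wanted to pull the registered PSK path back out of the tgtconf JSON captured below, a sketch (jq is assumed to be available; the test itself does not use it):

  rpc.py save_config | jq -r '.subsystems[] | select(.subsystem=="nvmf") | .config[] | select(.method=="nvmf_subsystem_add_host") | .params.psk'
  # -> /tmp/tmp.18FsmkDu3W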
00:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:19.948 00:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.948 [2024-05-15 00:58:06.990261] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:19.948 [2024-05-15 00:58:06.990412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519272 ] 00:21:20.209 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.209 [2024-05-15 00:58:07.118717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.209 [2024-05-15 00:58:07.214815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.777 00:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:20.777 00:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:20.777 00:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W 00:21:20.777 [2024-05-15 00:58:07.830080] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.777 [2024-05-15 00:58:07.830201] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:21.036 TLSTESTn1 00:21:21.036 00:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:21:21.298 00:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:21.298 "subsystems": [ 00:21:21.298 { 00:21:21.298 "subsystem": "keyring", 00:21:21.298 "config": [] 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "subsystem": "iobuf", 00:21:21.298 "config": [ 00:21:21.298 { 00:21:21.298 "method": "iobuf_set_options", 00:21:21.298 "params": { 00:21:21.298 "small_pool_count": 8192, 00:21:21.298 "large_pool_count": 1024, 00:21:21.298 "small_bufsize": 8192, 00:21:21.298 "large_bufsize": 135168 00:21:21.298 } 00:21:21.298 } 00:21:21.298 ] 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "subsystem": "sock", 00:21:21.298 "config": [ 00:21:21.298 { 00:21:21.298 "method": "sock_impl_set_options", 00:21:21.298 "params": { 00:21:21.298 "impl_name": "posix", 00:21:21.298 "recv_buf_size": 2097152, 00:21:21.298 "send_buf_size": 2097152, 00:21:21.298 "enable_recv_pipe": true, 00:21:21.298 "enable_quickack": false, 00:21:21.298 "enable_placement_id": 0, 00:21:21.298 "enable_zerocopy_send_server": true, 00:21:21.298 "enable_zerocopy_send_client": false, 00:21:21.298 "zerocopy_threshold": 0, 00:21:21.298 "tls_version": 0, 00:21:21.298 "enable_ktls": false 00:21:21.298 } 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "method": "sock_impl_set_options", 00:21:21.298 "params": { 00:21:21.298 "impl_name": "ssl", 00:21:21.298 "recv_buf_size": 4096, 
00:21:21.298 "send_buf_size": 4096, 00:21:21.298 "enable_recv_pipe": true, 00:21:21.298 "enable_quickack": false, 00:21:21.298 "enable_placement_id": 0, 00:21:21.298 "enable_zerocopy_send_server": true, 00:21:21.298 "enable_zerocopy_send_client": false, 00:21:21.298 "zerocopy_threshold": 0, 00:21:21.298 "tls_version": 0, 00:21:21.298 "enable_ktls": false 00:21:21.298 } 00:21:21.298 } 00:21:21.298 ] 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "subsystem": "vmd", 00:21:21.298 "config": [] 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "subsystem": "accel", 00:21:21.298 "config": [ 00:21:21.298 { 00:21:21.298 "method": "accel_set_options", 00:21:21.298 "params": { 00:21:21.298 "small_cache_size": 128, 00:21:21.298 "large_cache_size": 16, 00:21:21.298 "task_count": 2048, 00:21:21.298 "sequence_count": 2048, 00:21:21.298 "buf_count": 2048 00:21:21.298 } 00:21:21.298 } 00:21:21.298 ] 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "subsystem": "bdev", 00:21:21.298 "config": [ 00:21:21.298 { 00:21:21.298 "method": "bdev_set_options", 00:21:21.298 "params": { 00:21:21.298 "bdev_io_pool_size": 65535, 00:21:21.298 "bdev_io_cache_size": 256, 00:21:21.298 "bdev_auto_examine": true, 00:21:21.298 "iobuf_small_cache_size": 128, 00:21:21.298 "iobuf_large_cache_size": 16 00:21:21.298 } 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "method": "bdev_raid_set_options", 00:21:21.298 "params": { 00:21:21.298 "process_window_size_kb": 1024 00:21:21.298 } 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "method": "bdev_iscsi_set_options", 00:21:21.298 "params": { 00:21:21.298 "timeout_sec": 30 00:21:21.298 } 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "method": "bdev_nvme_set_options", 00:21:21.298 "params": { 00:21:21.298 "action_on_timeout": "none", 00:21:21.298 "timeout_us": 0, 00:21:21.298 "timeout_admin_us": 0, 00:21:21.298 "keep_alive_timeout_ms": 10000, 00:21:21.298 "arbitration_burst": 0, 00:21:21.298 "low_priority_weight": 0, 00:21:21.298 "medium_priority_weight": 0, 00:21:21.298 "high_priority_weight": 0, 00:21:21.298 "nvme_adminq_poll_period_us": 10000, 00:21:21.298 "nvme_ioq_poll_period_us": 0, 00:21:21.298 "io_queue_requests": 0, 00:21:21.298 "delay_cmd_submit": true, 00:21:21.298 "transport_retry_count": 4, 00:21:21.298 "bdev_retry_count": 3, 00:21:21.298 "transport_ack_timeout": 0, 00:21:21.298 "ctrlr_loss_timeout_sec": 0, 00:21:21.298 "reconnect_delay_sec": 0, 00:21:21.298 "fast_io_fail_timeout_sec": 0, 00:21:21.298 "disable_auto_failback": false, 00:21:21.298 "generate_uuids": false, 00:21:21.298 "transport_tos": 0, 00:21:21.298 "nvme_error_stat": false, 00:21:21.298 "rdma_srq_size": 0, 00:21:21.298 "io_path_stat": false, 00:21:21.298 "allow_accel_sequence": false, 00:21:21.298 "rdma_max_cq_size": 0, 00:21:21.298 "rdma_cm_event_timeout_ms": 0, 00:21:21.298 "dhchap_digests": [ 00:21:21.298 "sha256", 00:21:21.298 "sha384", 00:21:21.298 "sha512" 00:21:21.298 ], 00:21:21.298 "dhchap_dhgroups": [ 00:21:21.298 "null", 00:21:21.298 "ffdhe2048", 00:21:21.298 "ffdhe3072", 00:21:21.298 "ffdhe4096", 00:21:21.298 "ffdhe6144", 00:21:21.298 "ffdhe8192" 00:21:21.298 ] 00:21:21.298 } 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "method": "bdev_nvme_set_hotplug", 00:21:21.298 "params": { 00:21:21.298 "period_us": 100000, 00:21:21.298 "enable": false 00:21:21.298 } 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "method": "bdev_malloc_create", 00:21:21.298 "params": { 00:21:21.298 "name": "malloc0", 00:21:21.298 "num_blocks": 8192, 00:21:21.298 "block_size": 4096, 00:21:21.298 "physical_block_size": 4096, 00:21:21.298 "uuid": 
"919e85af-ea67-42d9-8b70-214f18530dab", 00:21:21.298 "optimal_io_boundary": 0 00:21:21.298 } 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "method": "bdev_wait_for_examine" 00:21:21.298 } 00:21:21.298 ] 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "subsystem": "nbd", 00:21:21.298 "config": [] 00:21:21.298 }, 00:21:21.298 { 00:21:21.298 "subsystem": "scheduler", 00:21:21.298 "config": [ 00:21:21.298 { 00:21:21.298 "method": "framework_set_scheduler", 00:21:21.298 "params": { 00:21:21.298 "name": "static" 00:21:21.298 } 00:21:21.298 } 00:21:21.298 ] 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "subsystem": "nvmf", 00:21:21.299 "config": [ 00:21:21.299 { 00:21:21.299 "method": "nvmf_set_config", 00:21:21.299 "params": { 00:21:21.299 "discovery_filter": "match_any", 00:21:21.299 "admin_cmd_passthru": { 00:21:21.299 "identify_ctrlr": false 00:21:21.299 } 00:21:21.299 } 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "method": "nvmf_set_max_subsystems", 00:21:21.299 "params": { 00:21:21.299 "max_subsystems": 1024 00:21:21.299 } 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "method": "nvmf_set_crdt", 00:21:21.299 "params": { 00:21:21.299 "crdt1": 0, 00:21:21.299 "crdt2": 0, 00:21:21.299 "crdt3": 0 00:21:21.299 } 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "method": "nvmf_create_transport", 00:21:21.299 "params": { 00:21:21.299 "trtype": "TCP", 00:21:21.299 "max_queue_depth": 128, 00:21:21.299 "max_io_qpairs_per_ctrlr": 127, 00:21:21.299 "in_capsule_data_size": 4096, 00:21:21.299 "max_io_size": 131072, 00:21:21.299 "io_unit_size": 131072, 00:21:21.299 "max_aq_depth": 128, 00:21:21.299 "num_shared_buffers": 511, 00:21:21.299 "buf_cache_size": 4294967295, 00:21:21.299 "dif_insert_or_strip": false, 00:21:21.299 "zcopy": false, 00:21:21.299 "c2h_success": false, 00:21:21.299 "sock_priority": 0, 00:21:21.299 "abort_timeout_sec": 1, 00:21:21.299 "ack_timeout": 0, 00:21:21.299 "data_wr_pool_size": 0 00:21:21.299 } 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "method": "nvmf_create_subsystem", 00:21:21.299 "params": { 00:21:21.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.299 "allow_any_host": false, 00:21:21.299 "serial_number": "SPDK00000000000001", 00:21:21.299 "model_number": "SPDK bdev Controller", 00:21:21.299 "max_namespaces": 10, 00:21:21.299 "min_cntlid": 1, 00:21:21.299 "max_cntlid": 65519, 00:21:21.299 "ana_reporting": false 00:21:21.299 } 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "method": "nvmf_subsystem_add_host", 00:21:21.299 "params": { 00:21:21.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.299 "host": "nqn.2016-06.io.spdk:host1", 00:21:21.299 "psk": "/tmp/tmp.18FsmkDu3W" 00:21:21.299 } 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "method": "nvmf_subsystem_add_ns", 00:21:21.299 "params": { 00:21:21.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.299 "namespace": { 00:21:21.299 "nsid": 1, 00:21:21.299 "bdev_name": "malloc0", 00:21:21.299 "nguid": "919E85AFEA6742D98B70214F18530DAB", 00:21:21.299 "uuid": "919e85af-ea67-42d9-8b70-214f18530dab", 00:21:21.299 "no_auto_visible": false 00:21:21.299 } 00:21:21.299 } 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "method": "nvmf_subsystem_add_listener", 00:21:21.299 "params": { 00:21:21.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.299 "listen_address": { 00:21:21.299 "trtype": "TCP", 00:21:21.299 "adrfam": "IPv4", 00:21:21.299 "traddr": "10.0.0.2", 00:21:21.299 "trsvcid": "4420" 00:21:21.299 }, 00:21:21.299 "secure_channel": true 00:21:21.299 } 00:21:21.299 } 00:21:21.299 ] 00:21:21.299 } 00:21:21.299 ] 00:21:21.299 }' 00:21:21.299 00:58:08 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:21.299 00:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:21.299 "subsystems": [ 00:21:21.299 { 00:21:21.299 "subsystem": "keyring", 00:21:21.299 "config": [] 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "subsystem": "iobuf", 00:21:21.299 "config": [ 00:21:21.299 { 00:21:21.299 "method": "iobuf_set_options", 00:21:21.299 "params": { 00:21:21.299 "small_pool_count": 8192, 00:21:21.299 "large_pool_count": 1024, 00:21:21.299 "small_bufsize": 8192, 00:21:21.299 "large_bufsize": 135168 00:21:21.299 } 00:21:21.299 } 00:21:21.299 ] 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "subsystem": "sock", 00:21:21.299 "config": [ 00:21:21.299 { 00:21:21.299 "method": "sock_impl_set_options", 00:21:21.299 "params": { 00:21:21.299 "impl_name": "posix", 00:21:21.299 "recv_buf_size": 2097152, 00:21:21.299 "send_buf_size": 2097152, 00:21:21.299 "enable_recv_pipe": true, 00:21:21.299 "enable_quickack": false, 00:21:21.299 "enable_placement_id": 0, 00:21:21.299 "enable_zerocopy_send_server": true, 00:21:21.299 "enable_zerocopy_send_client": false, 00:21:21.299 "zerocopy_threshold": 0, 00:21:21.299 "tls_version": 0, 00:21:21.299 "enable_ktls": false 00:21:21.299 } 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "method": "sock_impl_set_options", 00:21:21.299 "params": { 00:21:21.299 "impl_name": "ssl", 00:21:21.299 "recv_buf_size": 4096, 00:21:21.299 "send_buf_size": 4096, 00:21:21.299 "enable_recv_pipe": true, 00:21:21.299 "enable_quickack": false, 00:21:21.299 "enable_placement_id": 0, 00:21:21.299 "enable_zerocopy_send_server": true, 00:21:21.299 "enable_zerocopy_send_client": false, 00:21:21.299 "zerocopy_threshold": 0, 00:21:21.299 "tls_version": 0, 00:21:21.299 "enable_ktls": false 00:21:21.299 } 00:21:21.299 } 00:21:21.299 ] 00:21:21.299 }, 00:21:21.299 { 00:21:21.299 "subsystem": "vmd", 00:21:21.300 "config": [] 00:21:21.300 }, 00:21:21.300 { 00:21:21.300 "subsystem": "accel", 00:21:21.300 "config": [ 00:21:21.300 { 00:21:21.300 "method": "accel_set_options", 00:21:21.300 "params": { 00:21:21.300 "small_cache_size": 128, 00:21:21.300 "large_cache_size": 16, 00:21:21.300 "task_count": 2048, 00:21:21.300 "sequence_count": 2048, 00:21:21.300 "buf_count": 2048 00:21:21.300 } 00:21:21.300 } 00:21:21.300 ] 00:21:21.300 }, 00:21:21.300 { 00:21:21.300 "subsystem": "bdev", 00:21:21.300 "config": [ 00:21:21.300 { 00:21:21.300 "method": "bdev_set_options", 00:21:21.300 "params": { 00:21:21.300 "bdev_io_pool_size": 65535, 00:21:21.300 "bdev_io_cache_size": 256, 00:21:21.300 "bdev_auto_examine": true, 00:21:21.300 "iobuf_small_cache_size": 128, 00:21:21.300 "iobuf_large_cache_size": 16 00:21:21.300 } 00:21:21.300 }, 00:21:21.300 { 00:21:21.300 "method": "bdev_raid_set_options", 00:21:21.300 "params": { 00:21:21.300 "process_window_size_kb": 1024 00:21:21.300 } 00:21:21.300 }, 00:21:21.300 { 00:21:21.300 "method": "bdev_iscsi_set_options", 00:21:21.300 "params": { 00:21:21.300 "timeout_sec": 30 00:21:21.300 } 00:21:21.300 }, 00:21:21.300 { 00:21:21.300 "method": "bdev_nvme_set_options", 00:21:21.300 "params": { 00:21:21.300 "action_on_timeout": "none", 00:21:21.300 "timeout_us": 0, 00:21:21.300 "timeout_admin_us": 0, 00:21:21.300 "keep_alive_timeout_ms": 10000, 00:21:21.300 "arbitration_burst": 0, 00:21:21.300 "low_priority_weight": 0, 00:21:21.300 "medium_priority_weight": 0, 00:21:21.300 "high_priority_weight": 0, 00:21:21.300 
"nvme_adminq_poll_period_us": 10000, 00:21:21.300 "nvme_ioq_poll_period_us": 0, 00:21:21.300 "io_queue_requests": 512, 00:21:21.300 "delay_cmd_submit": true, 00:21:21.300 "transport_retry_count": 4, 00:21:21.300 "bdev_retry_count": 3, 00:21:21.300 "transport_ack_timeout": 0, 00:21:21.300 "ctrlr_loss_timeout_sec": 0, 00:21:21.300 "reconnect_delay_sec": 0, 00:21:21.300 "fast_io_fail_timeout_sec": 0, 00:21:21.300 "disable_auto_failback": false, 00:21:21.300 "generate_uuids": false, 00:21:21.300 "transport_tos": 0, 00:21:21.300 "nvme_error_stat": false, 00:21:21.300 "rdma_srq_size": 0, 00:21:21.300 "io_path_stat": false, 00:21:21.300 "allow_accel_sequence": false, 00:21:21.300 "rdma_max_cq_size": 0, 00:21:21.300 "rdma_cm_event_timeout_ms": 0, 00:21:21.300 "dhchap_digests": [ 00:21:21.300 "sha256", 00:21:21.300 "sha384", 00:21:21.300 "sha512" 00:21:21.300 ], 00:21:21.300 "dhchap_dhgroups": [ 00:21:21.300 "null", 00:21:21.300 "ffdhe2048", 00:21:21.300 "ffdhe3072", 00:21:21.300 "ffdhe4096", 00:21:21.300 "ffdhe6144", 00:21:21.300 "ffdhe8192" 00:21:21.300 ] 00:21:21.300 } 00:21:21.300 }, 00:21:21.300 { 00:21:21.300 "method": "bdev_nvme_attach_controller", 00:21:21.300 "params": { 00:21:21.300 "name": "TLSTEST", 00:21:21.300 "trtype": "TCP", 00:21:21.300 "adrfam": "IPv4", 00:21:21.300 "traddr": "10.0.0.2", 00:21:21.300 "trsvcid": "4420", 00:21:21.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.300 "prchk_reftag": false, 00:21:21.300 "prchk_guard": false, 00:21:21.300 "ctrlr_loss_timeout_sec": 0, 00:21:21.300 "reconnect_delay_sec": 0, 00:21:21.300 "fast_io_fail_timeout_sec": 0, 00:21:21.300 "psk": "/tmp/tmp.18FsmkDu3W", 00:21:21.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.300 "hdgst": false, 00:21:21.300 "ddgst": false 00:21:21.300 } 00:21:21.300 }, 00:21:21.300 { 00:21:21.300 "method": "bdev_nvme_set_hotplug", 00:21:21.300 "params": { 00:21:21.300 "period_us": 100000, 00:21:21.300 "enable": false 00:21:21.300 } 00:21:21.300 }, 00:21:21.300 { 00:21:21.300 "method": "bdev_wait_for_examine" 00:21:21.300 } 00:21:21.300 ] 00:21:21.300 }, 00:21:21.300 { 00:21:21.300 "subsystem": "nbd", 00:21:21.300 "config": [] 00:21:21.300 } 00:21:21.300 ] 00:21:21.300 }' 00:21:21.300 00:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3519272 00:21:21.300 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3519272 ']' 00:21:21.300 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3519272 00:21:21.300 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:21.559 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:21.560 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3519272 00:21:21.560 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:21.560 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:21.560 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3519272' 00:21:21.560 killing process with pid 3519272 00:21:21.560 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3519272 00:21:21.560 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.560 00:21:21.560 Latency(us) 00:21:21.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.560 =================================================================================================================== 
00:21:21.560 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.560 [2024-05-15 00:58:08.401702] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3519272 00:21:21.560 scheduled for removal in v24.09 hit 1 times 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3518944 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3518944 ']' 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3518944 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3518944 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3518944' 00:21:21.818 killing process with pid 3518944 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3518944 00:21:21.818 [2024-05-15 00:58:08.827279] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:21.818 [2024-05-15 00:58:08.827335] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.818 00:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3518944 00:21:22.386 00:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:22.386 00:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.386 00:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:22.386 00:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.386 00:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:22.386 "subsystems": [ 00:21:22.386 { 00:21:22.386 "subsystem": "keyring", 00:21:22.386 "config": [] 00:21:22.386 }, 00:21:22.386 { 00:21:22.386 "subsystem": "iobuf", 00:21:22.386 "config": [ 00:21:22.386 { 00:21:22.386 "method": "iobuf_set_options", 00:21:22.386 "params": { 00:21:22.386 "small_pool_count": 8192, 00:21:22.386 "large_pool_count": 1024, 00:21:22.386 "small_bufsize": 8192, 00:21:22.386 "large_bufsize": 135168 00:21:22.386 } 00:21:22.386 } 00:21:22.386 ] 00:21:22.386 }, 00:21:22.386 { 00:21:22.386 "subsystem": "sock", 00:21:22.386 "config": [ 00:21:22.386 { 00:21:22.386 "method": "sock_impl_set_options", 00:21:22.386 "params": { 00:21:22.386 "impl_name": "posix", 00:21:22.386 "recv_buf_size": 2097152, 00:21:22.386 "send_buf_size": 2097152, 00:21:22.386 "enable_recv_pipe": true, 00:21:22.386 "enable_quickack": false, 00:21:22.386 "enable_placement_id": 0, 00:21:22.386 "enable_zerocopy_send_server": true, 00:21:22.386 "enable_zerocopy_send_client": false, 00:21:22.386 "zerocopy_threshold": 0, 00:21:22.386 "tls_version": 0, 00:21:22.386 "enable_ktls": false 00:21:22.386 } 00:21:22.386 }, 00:21:22.386 { 00:21:22.386 "method": "sock_impl_set_options", 00:21:22.386 "params": { 00:21:22.386 
"impl_name": "ssl", 00:21:22.386 "recv_buf_size": 4096, 00:21:22.386 "send_buf_size": 4096, 00:21:22.386 "enable_recv_pipe": true, 00:21:22.386 "enable_quickack": false, 00:21:22.386 "enable_placement_id": 0, 00:21:22.386 "enable_zerocopy_send_server": true, 00:21:22.386 "enable_zerocopy_send_client": false, 00:21:22.386 "zerocopy_threshold": 0, 00:21:22.386 "tls_version": 0, 00:21:22.386 "enable_ktls": false 00:21:22.386 } 00:21:22.386 } 00:21:22.386 ] 00:21:22.386 }, 00:21:22.386 { 00:21:22.386 "subsystem": "vmd", 00:21:22.387 "config": [] 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "subsystem": "accel", 00:21:22.387 "config": [ 00:21:22.387 { 00:21:22.387 "method": "accel_set_options", 00:21:22.387 "params": { 00:21:22.387 "small_cache_size": 128, 00:21:22.387 "large_cache_size": 16, 00:21:22.387 "task_count": 2048, 00:21:22.387 "sequence_count": 2048, 00:21:22.387 "buf_count": 2048 00:21:22.387 } 00:21:22.387 } 00:21:22.387 ] 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "subsystem": "bdev", 00:21:22.387 "config": [ 00:21:22.387 { 00:21:22.387 "method": "bdev_set_options", 00:21:22.387 "params": { 00:21:22.387 "bdev_io_pool_size": 65535, 00:21:22.387 "bdev_io_cache_size": 256, 00:21:22.387 "bdev_auto_examine": true, 00:21:22.387 "iobuf_small_cache_size": 128, 00:21:22.387 "iobuf_large_cache_size": 16 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "bdev_raid_set_options", 00:21:22.387 "params": { 00:21:22.387 "process_window_size_kb": 1024 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "bdev_iscsi_set_options", 00:21:22.387 "params": { 00:21:22.387 "timeout_sec": 30 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "bdev_nvme_set_options", 00:21:22.387 "params": { 00:21:22.387 "action_on_timeout": "none", 00:21:22.387 "timeout_us": 0, 00:21:22.387 "timeout_admin_us": 0, 00:21:22.387 "keep_alive_timeout_ms": 10000, 00:21:22.387 "arbitration_burst": 0, 00:21:22.387 "low_priority_weight": 0, 00:21:22.387 "medium_priority_weight": 0, 00:21:22.387 "high_priority_weight": 0, 00:21:22.387 "nvme_adminq_poll_period_us": 10000, 00:21:22.387 "nvme_ioq_poll_period_us": 0, 00:21:22.387 "io_queue_requests": 0, 00:21:22.387 "delay_cmd_submit": true, 00:21:22.387 "transport_retry_count": 4, 00:21:22.387 "bdev_retry_count": 3, 00:21:22.387 "transport_ack_timeout": 0, 00:21:22.387 "ctrlr_loss_timeout_sec": 0, 00:21:22.387 "reconnect_delay_sec": 0, 00:21:22.387 "fast_io_fail_timeout_sec": 0, 00:21:22.387 "disable_auto_failback": false, 00:21:22.387 "generate_uuids": false, 00:21:22.387 "transport_tos": 0, 00:21:22.387 "nvme_error_stat": false, 00:21:22.387 "rdma_srq_size": 0, 00:21:22.387 "io_path_stat": false, 00:21:22.387 "allow_accel_sequence": false, 00:21:22.387 "rdma_max_cq_size": 0, 00:21:22.387 "rdma_cm_event_timeout_ms": 0, 00:21:22.387 "dhchap_digests": [ 00:21:22.387 "sha256", 00:21:22.387 "sha384", 00:21:22.387 "sha512" 00:21:22.387 ], 00:21:22.387 "dhchap_dhgroups": [ 00:21:22.387 "null", 00:21:22.387 "ffdhe2048", 00:21:22.387 "ffdhe3072", 00:21:22.387 "ffdhe4096", 00:21:22.387 "ffdhe6144", 00:21:22.387 "ffdhe8192" 00:21:22.387 ] 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "bdev_nvme_set_hotplug", 00:21:22.387 "params": { 00:21:22.387 "period_us": 100000, 00:21:22.387 "enable": false 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "bdev_malloc_create", 00:21:22.387 "params": { 00:21:22.387 "name": "malloc0", 00:21:22.387 "num_blocks": 8192, 00:21:22.387 "block_size": 4096, 00:21:22.387 
"physical_block_size": 4096, 00:21:22.387 "uuid": "919e85af-ea67-42d9-8b70-214f18530dab", 00:21:22.387 "optimal_io_boundary": 0 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "bdev_wait_for_examine" 00:21:22.387 } 00:21:22.387 ] 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "subsystem": "nbd", 00:21:22.387 "config": [] 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "subsystem": "scheduler", 00:21:22.387 "config": [ 00:21:22.387 { 00:21:22.387 "method": "framework_set_scheduler", 00:21:22.387 "params": { 00:21:22.387 "name": "static" 00:21:22.387 } 00:21:22.387 } 00:21:22.387 ] 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "subsystem": "nvmf", 00:21:22.387 "config": [ 00:21:22.387 { 00:21:22.387 "method": "nvmf_set_config", 00:21:22.387 "params": { 00:21:22.387 "discovery_filter": "match_any", 00:21:22.387 "admin_cmd_passthru": { 00:21:22.387 "identify_ctrlr": false 00:21:22.387 } 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "nvmf_set_max_subsystems", 00:21:22.387 "params": { 00:21:22.387 "max_subsystems": 1024 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "nvmf_set_crdt", 00:21:22.387 "params": { 00:21:22.387 "crdt1": 0, 00:21:22.387 "crdt2": 0, 00:21:22.387 "crdt3": 0 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "nvmf_create_transport", 00:21:22.387 "params": { 00:21:22.387 "trtype": "TCP", 00:21:22.387 "max_queue_depth": 128, 00:21:22.387 "max_io_qpairs_per_ctrlr": 127, 00:21:22.387 "in_capsule_data_size": 4096, 00:21:22.387 "max_io_size": 131072, 00:21:22.387 "io_unit_size": 131072, 00:21:22.387 "max_aq_depth": 128, 00:21:22.387 "num_shared_buffers": 511, 00:21:22.387 "buf_cache_size": 4294967295, 00:21:22.387 "dif_insert_or_strip": false, 00:21:22.387 "zcopy": false, 00:21:22.387 "c2h_success": false, 00:21:22.387 "sock_priority": 0, 00:21:22.387 "abort_timeout_sec": 1, 00:21:22.387 "ack_timeout": 0, 00:21:22.387 "data_wr_pool_size": 0 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "nvmf_create_subsystem", 00:21:22.387 "params": { 00:21:22.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.387 "allow_any_host": false, 00:21:22.387 "serial_number": "SPDK00000000000001", 00:21:22.387 "model_number": "SPDK bdev Controller", 00:21:22.387 "max_namespaces": 10, 00:21:22.387 "min_cntlid": 1, 00:21:22.387 "max_cntlid": 65519, 00:21:22.387 "ana_reporting": false 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "nvmf_subsystem_add_host", 00:21:22.387 "params": { 00:21:22.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.387 "host": "nqn.2016-06.io.spdk:host1", 00:21:22.387 "psk": "/tmp/tmp.18FsmkDu3W" 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "nvmf_subsystem_add_ns", 00:21:22.387 "params": { 00:21:22.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.387 "namespace": { 00:21:22.387 "nsid": 1, 00:21:22.387 "bdev_name": "malloc0", 00:21:22.387 "nguid": "919E85AFEA6742D98B70214F18530DAB", 00:21:22.387 "uuid": "919e85af-ea67-42d9-8b70-214f18530dab", 00:21:22.387 "no_auto_visible": false 00:21:22.387 } 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "method": "nvmf_subsystem_add_listener", 00:21:22.387 "params": { 00:21:22.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.387 "listen_address": { 00:21:22.387 "trtype": "TCP", 00:21:22.387 "adrfam": "IPv4", 00:21:22.387 "traddr": "10.0.0.2", 00:21:22.387 "trsvcid": "4420" 00:21:22.387 }, 00:21:22.387 "secure_channel": true 00:21:22.387 } 00:21:22.387 } 00:21:22.387 ] 00:21:22.387 } 
00:21:22.387 ] 00:21:22.387 }' 00:21:22.387 00:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3519727 00:21:22.387 00:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3519727 00:21:22.387 00:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3519727 ']' 00:21:22.387 00:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.387 00:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:22.387 00:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.387 00:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:22.387 00:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:22.387 00:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.387 [2024-05-15 00:58:09.397839] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:22.387 [2024-05-15 00:58:09.397944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.647 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.647 [2024-05-15 00:58:09.526807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.647 [2024-05-15 00:58:09.619649] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.647 [2024-05-15 00:58:09.619704] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.647 [2024-05-15 00:58:09.619714] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.647 [2024-05-15 00:58:09.619725] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.647 [2024-05-15 00:58:09.619733] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
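At this point the harness tears down the original target and brings a fresh one up purely from the JSON captured by save_config above; the "-c /dev/fd/62" on the nvmf_tgt command line is that JSON being handed over an anonymous file descriptor instead of a file on disk. A minimal stand-alone equivalent, assuming a bash shell with process substitution and paths relative to the SPDK checkout (the helper functions the harness actually uses, and the network-namespace wrapper, are omitted here), would be roughly:

# Capture the live configuration of the running target over its default RPC socket.
tgtconf=$(scripts/rpc.py save_config)

# Relaunch the target from that JSON without touching the filesystem;
# <(...) expands to a /dev/fd/N path, which is what "-c /dev/fd/62" above corresponds to.
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &

# Writing the JSON to a regular file and passing it with -c works the same way:
#   scripts/rpc.py save_config > tgt_config.json
#   build/bin/nvmf_tgt -m 0x2 -c tgt_config.json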
00:21:22.647 [2024-05-15 00:58:09.619832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.907 [2024-05-15 00:58:09.901121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.908 [2024-05-15 00:58:09.917069] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:22.908 [2024-05-15 00:58:09.933033] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:22.908 [2024-05-15 00:58:09.933120] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.908 [2024-05-15 00:58:09.933361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3519901 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3519901 /var/tmp/bdevperf.sock 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3519901 ']' 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
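bdevperf is launched with "-z" in every pass of this test, which makes it come up idle and wait to be configured over the RPC socket given with "-r" before any I/O is issued; the run itself is then triggered through the bdevperf.py helper. Condensed, with paths relative to the SPDK checkout and the option values taken verbatim from this run, the pattern is roughly:

# Start bdevperf idle (-z), reachable over its own RPC socket (-r):
# queue depth 128, 4096-byte I/Os, "verify" workload, 10-second run time.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# After the TLS-capable NVMe bdev has been attached over that socket (see the
# bdev_nvme_attach_controller calls in this log), kick off the actual test run
# (the helper's -t value is copied from this run):
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests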
00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:23.169 00:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:23.169 "subsystems": [ 00:21:23.169 { 00:21:23.169 "subsystem": "keyring", 00:21:23.169 "config": [] 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "subsystem": "iobuf", 00:21:23.169 "config": [ 00:21:23.169 { 00:21:23.169 "method": "iobuf_set_options", 00:21:23.169 "params": { 00:21:23.169 "small_pool_count": 8192, 00:21:23.169 "large_pool_count": 1024, 00:21:23.169 "small_bufsize": 8192, 00:21:23.169 "large_bufsize": 135168 00:21:23.169 } 00:21:23.169 } 00:21:23.169 ] 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "subsystem": "sock", 00:21:23.169 "config": [ 00:21:23.169 { 00:21:23.169 "method": "sock_impl_set_options", 00:21:23.169 "params": { 00:21:23.169 "impl_name": "posix", 00:21:23.169 "recv_buf_size": 2097152, 00:21:23.169 "send_buf_size": 2097152, 00:21:23.169 "enable_recv_pipe": true, 00:21:23.169 "enable_quickack": false, 00:21:23.169 "enable_placement_id": 0, 00:21:23.169 "enable_zerocopy_send_server": true, 00:21:23.169 "enable_zerocopy_send_client": false, 00:21:23.169 "zerocopy_threshold": 0, 00:21:23.169 "tls_version": 0, 00:21:23.169 "enable_ktls": false 00:21:23.169 } 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "method": "sock_impl_set_options", 00:21:23.169 "params": { 00:21:23.169 "impl_name": "ssl", 00:21:23.169 "recv_buf_size": 4096, 00:21:23.169 "send_buf_size": 4096, 00:21:23.169 "enable_recv_pipe": true, 00:21:23.169 "enable_quickack": false, 00:21:23.169 "enable_placement_id": 0, 00:21:23.169 "enable_zerocopy_send_server": true, 00:21:23.169 "enable_zerocopy_send_client": false, 00:21:23.169 "zerocopy_threshold": 0, 00:21:23.169 "tls_version": 0, 00:21:23.169 "enable_ktls": false 00:21:23.169 } 00:21:23.169 } 00:21:23.169 ] 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "subsystem": "vmd", 00:21:23.169 "config": [] 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "subsystem": "accel", 00:21:23.169 "config": [ 00:21:23.169 { 00:21:23.169 "method": "accel_set_options", 00:21:23.169 "params": { 00:21:23.169 "small_cache_size": 128, 00:21:23.169 "large_cache_size": 16, 00:21:23.169 "task_count": 2048, 00:21:23.169 "sequence_count": 2048, 00:21:23.169 "buf_count": 2048 00:21:23.169 } 00:21:23.169 } 00:21:23.169 ] 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "subsystem": "bdev", 00:21:23.169 "config": [ 00:21:23.169 { 00:21:23.169 "method": "bdev_set_options", 00:21:23.169 "params": { 00:21:23.169 "bdev_io_pool_size": 65535, 00:21:23.169 "bdev_io_cache_size": 256, 00:21:23.169 "bdev_auto_examine": true, 00:21:23.169 "iobuf_small_cache_size": 128, 00:21:23.169 "iobuf_large_cache_size": 16 00:21:23.169 } 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "method": "bdev_raid_set_options", 00:21:23.169 "params": { 00:21:23.169 "process_window_size_kb": 1024 00:21:23.169 } 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "method": "bdev_iscsi_set_options", 00:21:23.169 "params": { 00:21:23.169 "timeout_sec": 30 00:21:23.169 } 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "method": "bdev_nvme_set_options", 00:21:23.169 "params": { 00:21:23.169 "action_on_timeout": "none", 00:21:23.169 "timeout_us": 0, 00:21:23.169 "timeout_admin_us": 0, 
00:21:23.169 "keep_alive_timeout_ms": 10000, 00:21:23.169 "arbitration_burst": 0, 00:21:23.169 "low_priority_weight": 0, 00:21:23.169 "medium_priority_weight": 0, 00:21:23.169 "high_priority_weight": 0, 00:21:23.169 "nvme_adminq_poll_period_us": 10000, 00:21:23.169 "nvme_ioq_poll_period_us": 0, 00:21:23.169 "io_queue_requests": 512, 00:21:23.169 "delay_cmd_submit": true, 00:21:23.169 "transport_retry_count": 4, 00:21:23.169 "bdev_retry_count": 3, 00:21:23.169 "transport_ack_timeout": 0, 00:21:23.169 "ctrlr_loss_timeout_sec": 0, 00:21:23.169 "reconnect_delay_sec": 0, 00:21:23.169 "fast_io_fail_timeout_sec": 0, 00:21:23.169 "disable_auto_failback": false, 00:21:23.169 "generate_uuids": false, 00:21:23.169 "transport_tos": 0, 00:21:23.169 "nvme_error_stat": false, 00:21:23.169 "rdma_srq_size": 0, 00:21:23.169 "io_path_stat": false, 00:21:23.169 "allow_accel_sequence": false, 00:21:23.169 "rdma_max_cq_size": 0, 00:21:23.169 "rdma_cm_event_timeout_ms": 0, 00:21:23.169 "dhchap_digests": [ 00:21:23.169 "sha256", 00:21:23.169 "sha384", 00:21:23.169 "sha512" 00:21:23.169 ], 00:21:23.169 "dhchap_dhgroups": [ 00:21:23.169 "null", 00:21:23.169 "ffdhe2048", 00:21:23.169 "ffdhe3072", 00:21:23.169 "ffdhe4096", 00:21:23.169 "ffdhe6144", 00:21:23.169 "ffdhe8192" 00:21:23.169 ] 00:21:23.169 } 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "method": "bdev_nvme_attach_controller", 00:21:23.169 "params": { 00:21:23.169 "name": "TLSTEST", 00:21:23.169 "trtype": "TCP", 00:21:23.169 "adrfam": "IPv4", 00:21:23.169 "traddr": "10.0.0.2", 00:21:23.169 "trsvcid": "4420", 00:21:23.169 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.169 "prchk_reftag": false, 00:21:23.169 "prchk_guard": false, 00:21:23.169 "ctrlr_loss_timeout_sec": 0, 00:21:23.169 "reconnect_delay_sec": 0, 00:21:23.169 "fast_io_fail_timeout_sec": 0, 00:21:23.169 "psk": "/tmp/tmp.18FsmkDu3W", 00:21:23.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.169 "hdgst": false, 00:21:23.169 "ddgst": false 00:21:23.169 } 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "method": "bdev_nvme_set_hotplug", 00:21:23.169 "params": { 00:21:23.169 "period_us": 100000, 00:21:23.169 "enable": false 00:21:23.169 } 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "method": "bdev_wait_for_examine" 00:21:23.169 } 00:21:23.169 ] 00:21:23.169 }, 00:21:23.169 { 00:21:23.169 "subsystem": "nbd", 00:21:23.169 "config": [] 00:21:23.169 } 00:21:23.169 ] 00:21:23.169 }' 00:21:23.169 [2024-05-15 00:58:10.224768] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:21:23.170 [2024-05-15 00:58:10.224914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519901 ] 00:21:23.427 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.427 [2024-05-15 00:58:10.355454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.427 [2024-05-15 00:58:10.452167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.685 [2024-05-15 00:58:10.662713] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.685 [2024-05-15 00:58:10.662823] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:23.943 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:23.943 00:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:23.943 00:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:24.202 Running I/O for 10 seconds... 00:21:34.176 00:21:34.176 Latency(us) 00:21:34.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.176 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:34.176 Verification LBA range: start 0x0 length 0x2000 00:21:34.176 TLSTESTn1 : 10.01 5683.92 22.20 0.00 0.00 22486.27 5967.23 38907.69 00:21:34.176 =================================================================================================================== 00:21:34.176 Total : 5683.92 22.20 0.00 0.00 22486.27 5967.23 38907.69 00:21:34.176 0 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3519901 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3519901 ']' 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3519901 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3519901 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3519901' 00:21:34.176 killing process with pid 3519901 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3519901 00:21:34.176 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.176 00:21:34.176 Latency(us) 00:21:34.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.176 =================================================================================================================== 00:21:34.176 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.176 [2024-05-15 00:58:21.096200] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal 
in v24.09 hit 1 times 00:21:34.176 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3519901 00:21:34.434 00:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3519727 00:21:34.434 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3519727 ']' 00:21:34.434 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3519727 00:21:34.434 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:34.434 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:34.692 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3519727 00:21:34.692 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:34.692 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:34.692 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3519727' 00:21:34.692 killing process with pid 3519727 00:21:34.692 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3519727 00:21:34.692 [2024-05-15 00:58:21.530007] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:34.692 [2024-05-15 00:58:21.530083] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:34.692 00:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3519727 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3522268 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3522268 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3522268 ']' 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.261 00:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:35.261 [2024-05-15 00:58:22.144581] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
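From here the target is launched inside the cvl_0_0_ns_spdk network namespace ("ip netns exec ..."), so the 10.0.0.2 address its TCP listener binds to belongs to that namespace rather than to the host. The RPC socket is a path-based UNIX socket, so rpc.py can still be run from outside the namespace, as the log shows. Stripped down, and with the namespace/interface provisioning itself omitted (it happens elsewhere in the harness and is not part of this log), the pattern is roughly:

# Run the target confined to the namespace that owns 10.0.0.2:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &

# Configuration still goes over the filesystem-backed UNIX RPC socket, no netns needed:
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
#   -k requests the TLS ("secure channel") listener, matching the
#   "secure_channel": true entry in the saved configuration dumps above.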
00:21:35.261 [2024-05-15 00:58:22.144718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.262 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.262 [2024-05-15 00:58:22.286160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.521 [2024-05-15 00:58:22.378279] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.521 [2024-05-15 00:58:22.378335] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.521 [2024-05-15 00:58:22.378345] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.521 [2024-05-15 00:58:22.378356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.521 [2024-05-15 00:58:22.378364] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.521 [2024-05-15 00:58:22.378400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.087 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:36.087 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:36.087 00:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.087 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.087 00:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.087 00:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.088 00:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.18FsmkDu3W 00:21:36.088 00:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.18FsmkDu3W 00:21:36.088 00:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.088 [2024-05-15 00:58:23.003546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.088 00:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:36.345 00:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:36.345 [2024-05-15 00:58:23.271553] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:36.345 [2024-05-15 00:58:23.271657] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.345 [2024-05-15 00:58:23.271867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.345 00:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:36.602 malloc0 00:21:36.602 00:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:36.602 00:58:23 
nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.18FsmkDu3W 00:21:36.861 [2024-05-15 00:58:23.702183] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3522588 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3522588 /var/tmp/bdevperf.sock 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3522588 ']' 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:36.861 00:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.861 [2024-05-15 00:58:23.772872] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:36.861 [2024-05-15 00:58:23.772949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522588 ] 00:21:36.861 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.861 [2024-05-15 00:58:23.859494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.121 [2024-05-15 00:58:23.950618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.688 00:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:37.688 00:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:37.688 00:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.18FsmkDu3W 00:21:37.688 00:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:37.688 [2024-05-15 00:58:24.750420] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.946 nvme0n1 00:21:37.946 00:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:37.946 Running I/O for 1 seconds... 
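This pass is the keyring variant of the TLS attach: instead of handing bdev_nvme_attach_controller a PSK file path, the file is first registered with the application's keyring under the name key0 and the attach then refers to the key by that name. The two RPCs, as issued in this run and collected here for readability:

# Register the PSK file as a named key in the bdevperf app's keyring:
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.18FsmkDu3W

# Attach the controller over TLS, referencing the key by name rather than by path:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The resulting bdev (nvme0n1) is then exercised with bdevperf.py ... perform_tests as in the earlier pass, and the key later shows up in the saved configuration as a keyring_file_add_key entry under the "keyring" subsystem.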
00:21:38.880 00:21:38.880 Latency(us) 00:21:38.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.880 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:38.880 Verification LBA range: start 0x0 length 0x2000 00:21:38.880 nvme0n1 : 1.01 5442.14 21.26 0.00 0.00 23366.67 5035.92 78919.14 00:21:38.880 =================================================================================================================== 00:21:38.880 Total : 5442.14 21.26 0.00 0.00 23366.67 5035.92 78919.14 00:21:38.880 0 00:21:38.881 00:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3522588 00:21:38.881 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3522588 ']' 00:21:38.881 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3522588 00:21:38.881 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:38.881 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:38.881 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3522588 00:21:39.139 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:39.139 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:39.139 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3522588' 00:21:39.139 killing process with pid 3522588 00:21:39.139 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3522588 00:21:39.139 Received shutdown signal, test time was about 1.000000 seconds 00:21:39.139 00:21:39.139 Latency(us) 00:21:39.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.139 =================================================================================================================== 00:21:39.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.139 00:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3522588 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3522268 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3522268 ']' 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3522268 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3522268 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3522268' 00:21:39.396 killing process with pid 3522268 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3522268 00:21:39.396 [2024-05-15 00:58:26.375467] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:39.396 [2024-05-15 00:58:26.375532] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:39.396 00:58:26 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 3522268 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3523204 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3523204 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3523204 ']' 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.961 00:58:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:39.961 [2024-05-15 00:58:26.944971] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:39.961 [2024-05-15 00:58:26.945084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.961 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.220 [2024-05-15 00:58:27.060660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.220 [2024-05-15 00:58:27.151896] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.220 [2024-05-15 00:58:27.151934] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.220 [2024-05-15 00:58:27.151942] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.220 [2024-05-15 00:58:27.151951] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.220 [2024-05-15 00:58:27.151958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
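The "-e 0xFFFF" on the nvmf_tgt command line sets the full tracepoint group mask, and the notices above spell out how to get at the resulting trace while this instance (-i 0) is running. In command form, as quoted by the application itself:

# Take a snapshot of the enabled nvmf tracepoints from running instance 0:
spdk_trace -s nvmf -i 0

# Or keep the raw shared-memory trace file for offline analysis/debug after the fact:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0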
00:21:40.220 [2024-05-15 00:58:27.151983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.789 [2024-05-15 00:58:27.698430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.789 malloc0 00:21:40.789 [2024-05-15 00:58:27.749308] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:40.789 [2024-05-15 00:58:27.749394] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:40.789 [2024-05-15 00:58:27.749633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3523427 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3523427 /var/tmp/bdevperf.sock 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3523427 ']' 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:40.789 00:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.048 [2024-05-15 00:58:27.851936] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:21:41.048 [2024-05-15 00:58:27.852061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523427 ] 00:21:41.048 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.048 [2024-05-15 00:58:27.969311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.048 [2024-05-15 00:58:28.060624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.680 00:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:41.680 00:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:41.680 00:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.18FsmkDu3W 00:21:41.680 00:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:41.937 [2024-05-15 00:58:28.786650] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.937 nvme0n1 00:21:41.937 00:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:41.937 Running I/O for 1 seconds... 00:21:43.315 00:21:43.315 Latency(us) 00:21:43.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.316 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:43.316 Verification LBA range: start 0x0 length 0x2000 00:21:43.316 nvme0n1 : 1.01 4647.52 18.15 0.00 0.00 27371.04 4587.52 38079.87 00:21:43.316 =================================================================================================================== 00:21:43.316 Total : 4647.52 18.15 0.00 0.00 27371.04 4587.52 38079.87 00:21:43.316 0 00:21:43.316 00:58:29 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:43.316 00:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.316 00:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.316 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.316 00:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:43.316 "subsystems": [ 00:21:43.316 { 00:21:43.316 "subsystem": "keyring", 00:21:43.316 "config": [ 00:21:43.316 { 00:21:43.316 "method": "keyring_file_add_key", 00:21:43.316 "params": { 00:21:43.316 "name": "key0", 00:21:43.316 "path": "/tmp/tmp.18FsmkDu3W" 00:21:43.316 } 00:21:43.316 } 00:21:43.316 ] 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "subsystem": "iobuf", 00:21:43.316 "config": [ 00:21:43.316 { 00:21:43.316 "method": "iobuf_set_options", 00:21:43.316 "params": { 00:21:43.316 "small_pool_count": 8192, 00:21:43.316 "large_pool_count": 1024, 00:21:43.316 "small_bufsize": 8192, 00:21:43.316 "large_bufsize": 135168 00:21:43.316 } 00:21:43.316 } 00:21:43.316 ] 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "subsystem": "sock", 00:21:43.316 "config": [ 00:21:43.316 { 00:21:43.316 "method": "sock_impl_set_options", 00:21:43.316 "params": { 00:21:43.316 "impl_name": "posix", 00:21:43.316 "recv_buf_size": 2097152, 00:21:43.316 
"send_buf_size": 2097152, 00:21:43.316 "enable_recv_pipe": true, 00:21:43.316 "enable_quickack": false, 00:21:43.316 "enable_placement_id": 0, 00:21:43.316 "enable_zerocopy_send_server": true, 00:21:43.316 "enable_zerocopy_send_client": false, 00:21:43.316 "zerocopy_threshold": 0, 00:21:43.316 "tls_version": 0, 00:21:43.316 "enable_ktls": false 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "sock_impl_set_options", 00:21:43.316 "params": { 00:21:43.316 "impl_name": "ssl", 00:21:43.316 "recv_buf_size": 4096, 00:21:43.316 "send_buf_size": 4096, 00:21:43.316 "enable_recv_pipe": true, 00:21:43.316 "enable_quickack": false, 00:21:43.316 "enable_placement_id": 0, 00:21:43.316 "enable_zerocopy_send_server": true, 00:21:43.316 "enable_zerocopy_send_client": false, 00:21:43.316 "zerocopy_threshold": 0, 00:21:43.316 "tls_version": 0, 00:21:43.316 "enable_ktls": false 00:21:43.316 } 00:21:43.316 } 00:21:43.316 ] 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "subsystem": "vmd", 00:21:43.316 "config": [] 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "subsystem": "accel", 00:21:43.316 "config": [ 00:21:43.316 { 00:21:43.316 "method": "accel_set_options", 00:21:43.316 "params": { 00:21:43.316 "small_cache_size": 128, 00:21:43.316 "large_cache_size": 16, 00:21:43.316 "task_count": 2048, 00:21:43.316 "sequence_count": 2048, 00:21:43.316 "buf_count": 2048 00:21:43.316 } 00:21:43.316 } 00:21:43.316 ] 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "subsystem": "bdev", 00:21:43.316 "config": [ 00:21:43.316 { 00:21:43.316 "method": "bdev_set_options", 00:21:43.316 "params": { 00:21:43.316 "bdev_io_pool_size": 65535, 00:21:43.316 "bdev_io_cache_size": 256, 00:21:43.316 "bdev_auto_examine": true, 00:21:43.316 "iobuf_small_cache_size": 128, 00:21:43.316 "iobuf_large_cache_size": 16 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "bdev_raid_set_options", 00:21:43.316 "params": { 00:21:43.316 "process_window_size_kb": 1024 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "bdev_iscsi_set_options", 00:21:43.316 "params": { 00:21:43.316 "timeout_sec": 30 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "bdev_nvme_set_options", 00:21:43.316 "params": { 00:21:43.316 "action_on_timeout": "none", 00:21:43.316 "timeout_us": 0, 00:21:43.316 "timeout_admin_us": 0, 00:21:43.316 "keep_alive_timeout_ms": 10000, 00:21:43.316 "arbitration_burst": 0, 00:21:43.316 "low_priority_weight": 0, 00:21:43.316 "medium_priority_weight": 0, 00:21:43.316 "high_priority_weight": 0, 00:21:43.316 "nvme_adminq_poll_period_us": 10000, 00:21:43.316 "nvme_ioq_poll_period_us": 0, 00:21:43.316 "io_queue_requests": 0, 00:21:43.316 "delay_cmd_submit": true, 00:21:43.316 "transport_retry_count": 4, 00:21:43.316 "bdev_retry_count": 3, 00:21:43.316 "transport_ack_timeout": 0, 00:21:43.316 "ctrlr_loss_timeout_sec": 0, 00:21:43.316 "reconnect_delay_sec": 0, 00:21:43.316 "fast_io_fail_timeout_sec": 0, 00:21:43.316 "disable_auto_failback": false, 00:21:43.316 "generate_uuids": false, 00:21:43.316 "transport_tos": 0, 00:21:43.316 "nvme_error_stat": false, 00:21:43.316 "rdma_srq_size": 0, 00:21:43.316 "io_path_stat": false, 00:21:43.316 "allow_accel_sequence": false, 00:21:43.316 "rdma_max_cq_size": 0, 00:21:43.316 "rdma_cm_event_timeout_ms": 0, 00:21:43.316 "dhchap_digests": [ 00:21:43.316 "sha256", 00:21:43.316 "sha384", 00:21:43.316 "sha512" 00:21:43.316 ], 00:21:43.316 "dhchap_dhgroups": [ 00:21:43.316 "null", 00:21:43.316 "ffdhe2048", 00:21:43.316 "ffdhe3072", 00:21:43.316 
"ffdhe4096", 00:21:43.316 "ffdhe6144", 00:21:43.316 "ffdhe8192" 00:21:43.316 ] 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "bdev_nvme_set_hotplug", 00:21:43.316 "params": { 00:21:43.316 "period_us": 100000, 00:21:43.316 "enable": false 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "bdev_malloc_create", 00:21:43.316 "params": { 00:21:43.316 "name": "malloc0", 00:21:43.316 "num_blocks": 8192, 00:21:43.316 "block_size": 4096, 00:21:43.316 "physical_block_size": 4096, 00:21:43.316 "uuid": "f2464bc4-6570-4772-a951-00d60731d1cc", 00:21:43.316 "optimal_io_boundary": 0 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "bdev_wait_for_examine" 00:21:43.316 } 00:21:43.316 ] 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "subsystem": "nbd", 00:21:43.316 "config": [] 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "subsystem": "scheduler", 00:21:43.316 "config": [ 00:21:43.316 { 00:21:43.316 "method": "framework_set_scheduler", 00:21:43.316 "params": { 00:21:43.316 "name": "static" 00:21:43.316 } 00:21:43.316 } 00:21:43.316 ] 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "subsystem": "nvmf", 00:21:43.316 "config": [ 00:21:43.316 { 00:21:43.316 "method": "nvmf_set_config", 00:21:43.316 "params": { 00:21:43.316 "discovery_filter": "match_any", 00:21:43.316 "admin_cmd_passthru": { 00:21:43.316 "identify_ctrlr": false 00:21:43.316 } 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "nvmf_set_max_subsystems", 00:21:43.316 "params": { 00:21:43.316 "max_subsystems": 1024 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "nvmf_set_crdt", 00:21:43.316 "params": { 00:21:43.316 "crdt1": 0, 00:21:43.316 "crdt2": 0, 00:21:43.316 "crdt3": 0 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "nvmf_create_transport", 00:21:43.316 "params": { 00:21:43.316 "trtype": "TCP", 00:21:43.316 "max_queue_depth": 128, 00:21:43.316 "max_io_qpairs_per_ctrlr": 127, 00:21:43.316 "in_capsule_data_size": 4096, 00:21:43.316 "max_io_size": 131072, 00:21:43.316 "io_unit_size": 131072, 00:21:43.316 "max_aq_depth": 128, 00:21:43.316 "num_shared_buffers": 511, 00:21:43.316 "buf_cache_size": 4294967295, 00:21:43.316 "dif_insert_or_strip": false, 00:21:43.316 "zcopy": false, 00:21:43.316 "c2h_success": false, 00:21:43.316 "sock_priority": 0, 00:21:43.316 "abort_timeout_sec": 1, 00:21:43.316 "ack_timeout": 0, 00:21:43.316 "data_wr_pool_size": 0 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "nvmf_create_subsystem", 00:21:43.316 "params": { 00:21:43.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.316 "allow_any_host": false, 00:21:43.316 "serial_number": "00000000000000000000", 00:21:43.316 "model_number": "SPDK bdev Controller", 00:21:43.316 "max_namespaces": 32, 00:21:43.316 "min_cntlid": 1, 00:21:43.316 "max_cntlid": 65519, 00:21:43.316 "ana_reporting": false 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "nvmf_subsystem_add_host", 00:21:43.316 "params": { 00:21:43.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.316 "host": "nqn.2016-06.io.spdk:host1", 00:21:43.316 "psk": "key0" 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "nvmf_subsystem_add_ns", 00:21:43.316 "params": { 00:21:43.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.316 "namespace": { 00:21:43.316 "nsid": 1, 00:21:43.316 "bdev_name": "malloc0", 00:21:43.316 "nguid": "F2464BC465704772A95100D60731D1CC", 00:21:43.316 "uuid": "f2464bc4-6570-4772-a951-00d60731d1cc", 00:21:43.316 "no_auto_visible": 
false 00:21:43.316 } 00:21:43.316 } 00:21:43.316 }, 00:21:43.316 { 00:21:43.316 "method": "nvmf_subsystem_add_listener", 00:21:43.316 "params": { 00:21:43.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.317 "listen_address": { 00:21:43.317 "trtype": "TCP", 00:21:43.317 "adrfam": "IPv4", 00:21:43.317 "traddr": "10.0.0.2", 00:21:43.317 "trsvcid": "4420" 00:21:43.317 }, 00:21:43.317 "secure_channel": true 00:21:43.317 } 00:21:43.317 } 00:21:43.317 ] 00:21:43.317 } 00:21:43.317 ] 00:21:43.317 }' 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:43.317 "subsystems": [ 00:21:43.317 { 00:21:43.317 "subsystem": "keyring", 00:21:43.317 "config": [ 00:21:43.317 { 00:21:43.317 "method": "keyring_file_add_key", 00:21:43.317 "params": { 00:21:43.317 "name": "key0", 00:21:43.317 "path": "/tmp/tmp.18FsmkDu3W" 00:21:43.317 } 00:21:43.317 } 00:21:43.317 ] 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "subsystem": "iobuf", 00:21:43.317 "config": [ 00:21:43.317 { 00:21:43.317 "method": "iobuf_set_options", 00:21:43.317 "params": { 00:21:43.317 "small_pool_count": 8192, 00:21:43.317 "large_pool_count": 1024, 00:21:43.317 "small_bufsize": 8192, 00:21:43.317 "large_bufsize": 135168 00:21:43.317 } 00:21:43.317 } 00:21:43.317 ] 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "subsystem": "sock", 00:21:43.317 "config": [ 00:21:43.317 { 00:21:43.317 "method": "sock_impl_set_options", 00:21:43.317 "params": { 00:21:43.317 "impl_name": "posix", 00:21:43.317 "recv_buf_size": 2097152, 00:21:43.317 "send_buf_size": 2097152, 00:21:43.317 "enable_recv_pipe": true, 00:21:43.317 "enable_quickack": false, 00:21:43.317 "enable_placement_id": 0, 00:21:43.317 "enable_zerocopy_send_server": true, 00:21:43.317 "enable_zerocopy_send_client": false, 00:21:43.317 "zerocopy_threshold": 0, 00:21:43.317 "tls_version": 0, 00:21:43.317 "enable_ktls": false 00:21:43.317 } 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "method": "sock_impl_set_options", 00:21:43.317 "params": { 00:21:43.317 "impl_name": "ssl", 00:21:43.317 "recv_buf_size": 4096, 00:21:43.317 "send_buf_size": 4096, 00:21:43.317 "enable_recv_pipe": true, 00:21:43.317 "enable_quickack": false, 00:21:43.317 "enable_placement_id": 0, 00:21:43.317 "enable_zerocopy_send_server": true, 00:21:43.317 "enable_zerocopy_send_client": false, 00:21:43.317 "zerocopy_threshold": 0, 00:21:43.317 "tls_version": 0, 00:21:43.317 "enable_ktls": false 00:21:43.317 } 00:21:43.317 } 00:21:43.317 ] 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "subsystem": "vmd", 00:21:43.317 "config": [] 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "subsystem": "accel", 00:21:43.317 "config": [ 00:21:43.317 { 00:21:43.317 "method": "accel_set_options", 00:21:43.317 "params": { 00:21:43.317 "small_cache_size": 128, 00:21:43.317 "large_cache_size": 16, 00:21:43.317 "task_count": 2048, 00:21:43.317 "sequence_count": 2048, 00:21:43.317 "buf_count": 2048 00:21:43.317 } 00:21:43.317 } 00:21:43.317 ] 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "subsystem": "bdev", 00:21:43.317 "config": [ 00:21:43.317 { 00:21:43.317 "method": "bdev_set_options", 00:21:43.317 "params": { 00:21:43.317 "bdev_io_pool_size": 65535, 00:21:43.317 "bdev_io_cache_size": 256, 00:21:43.317 "bdev_auto_examine": true, 00:21:43.317 "iobuf_small_cache_size": 128, 00:21:43.317 "iobuf_large_cache_size": 16 00:21:43.317 } 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 
"method": "bdev_raid_set_options", 00:21:43.317 "params": { 00:21:43.317 "process_window_size_kb": 1024 00:21:43.317 } 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "method": "bdev_iscsi_set_options", 00:21:43.317 "params": { 00:21:43.317 "timeout_sec": 30 00:21:43.317 } 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "method": "bdev_nvme_set_options", 00:21:43.317 "params": { 00:21:43.317 "action_on_timeout": "none", 00:21:43.317 "timeout_us": 0, 00:21:43.317 "timeout_admin_us": 0, 00:21:43.317 "keep_alive_timeout_ms": 10000, 00:21:43.317 "arbitration_burst": 0, 00:21:43.317 "low_priority_weight": 0, 00:21:43.317 "medium_priority_weight": 0, 00:21:43.317 "high_priority_weight": 0, 00:21:43.317 "nvme_adminq_poll_period_us": 10000, 00:21:43.317 "nvme_ioq_poll_period_us": 0, 00:21:43.317 "io_queue_requests": 512, 00:21:43.317 "delay_cmd_submit": true, 00:21:43.317 "transport_retry_count": 4, 00:21:43.317 "bdev_retry_count": 3, 00:21:43.317 "transport_ack_timeout": 0, 00:21:43.317 "ctrlr_loss_timeout_sec": 0, 00:21:43.317 "reconnect_delay_sec": 0, 00:21:43.317 "fast_io_fail_timeout_sec": 0, 00:21:43.317 "disable_auto_failback": false, 00:21:43.317 "generate_uuids": false, 00:21:43.317 "transport_tos": 0, 00:21:43.317 "nvme_error_stat": false, 00:21:43.317 "rdma_srq_size": 0, 00:21:43.317 "io_path_stat": false, 00:21:43.317 "allow_accel_sequence": false, 00:21:43.317 "rdma_max_cq_size": 0, 00:21:43.317 "rdma_cm_event_timeout_ms": 0, 00:21:43.317 "dhchap_digests": [ 00:21:43.317 "sha256", 00:21:43.317 "sha384", 00:21:43.317 "sha512" 00:21:43.317 ], 00:21:43.317 "dhchap_dhgroups": [ 00:21:43.317 "null", 00:21:43.317 "ffdhe2048", 00:21:43.317 "ffdhe3072", 00:21:43.317 "ffdhe4096", 00:21:43.317 "ffdhe6144", 00:21:43.317 "ffdhe8192" 00:21:43.317 ] 00:21:43.317 } 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "method": "bdev_nvme_attach_controller", 00:21:43.317 "params": { 00:21:43.317 "name": "nvme0", 00:21:43.317 "trtype": "TCP", 00:21:43.317 "adrfam": "IPv4", 00:21:43.317 "traddr": "10.0.0.2", 00:21:43.317 "trsvcid": "4420", 00:21:43.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.317 "prchk_reftag": false, 00:21:43.317 "prchk_guard": false, 00:21:43.317 "ctrlr_loss_timeout_sec": 0, 00:21:43.317 "reconnect_delay_sec": 0, 00:21:43.317 "fast_io_fail_timeout_sec": 0, 00:21:43.317 "psk": "key0", 00:21:43.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.317 "hdgst": false, 00:21:43.317 "ddgst": false 00:21:43.317 } 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "method": "bdev_nvme_set_hotplug", 00:21:43.317 "params": { 00:21:43.317 "period_us": 100000, 00:21:43.317 "enable": false 00:21:43.317 } 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "method": "bdev_enable_histogram", 00:21:43.317 "params": { 00:21:43.317 "name": "nvme0n1", 00:21:43.317 "enable": true 00:21:43.317 } 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "method": "bdev_wait_for_examine" 00:21:43.317 } 00:21:43.317 ] 00:21:43.317 }, 00:21:43.317 { 00:21:43.317 "subsystem": "nbd", 00:21:43.317 "config": [] 00:21:43.317 } 00:21:43.317 ] 00:21:43.317 }' 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3523427 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3523427 ']' 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3523427 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3523427 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3523427' 00:21:43.317 killing process with pid 3523427 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3523427 00:21:43.317 Received shutdown signal, test time was about 1.000000 seconds 00:21:43.317 00:21:43.317 Latency(us) 00:21:43.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.317 =================================================================================================================== 00:21:43.317 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.317 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3523427 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3523204 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3523204 ']' 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3523204 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3523204 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3523204' 00:21:43.884 killing process with pid 3523204 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3523204 00:21:43.884 [2024-05-15 00:58:30.708961] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:43.884 00:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3523204 00:21:44.456 00:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:44.456 00:58:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.456 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:44.456 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.456 00:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:44.456 "subsystems": [ 00:21:44.456 { 00:21:44.456 "subsystem": "keyring", 00:21:44.456 "config": [ 00:21:44.456 { 00:21:44.456 "method": "keyring_file_add_key", 00:21:44.456 "params": { 00:21:44.456 "name": "key0", 00:21:44.456 "path": "/tmp/tmp.18FsmkDu3W" 00:21:44.456 } 00:21:44.456 } 00:21:44.456 ] 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "subsystem": "iobuf", 00:21:44.456 "config": [ 00:21:44.456 { 00:21:44.456 "method": "iobuf_set_options", 00:21:44.456 "params": { 00:21:44.456 "small_pool_count": 8192, 00:21:44.456 "large_pool_count": 1024, 00:21:44.456 "small_bufsize": 8192, 00:21:44.456 "large_bufsize": 135168 00:21:44.456 } 00:21:44.456 } 00:21:44.456 ] 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "subsystem": "sock", 00:21:44.456 
"config": [ 00:21:44.456 { 00:21:44.456 "method": "sock_impl_set_options", 00:21:44.456 "params": { 00:21:44.456 "impl_name": "posix", 00:21:44.456 "recv_buf_size": 2097152, 00:21:44.456 "send_buf_size": 2097152, 00:21:44.456 "enable_recv_pipe": true, 00:21:44.456 "enable_quickack": false, 00:21:44.456 "enable_placement_id": 0, 00:21:44.456 "enable_zerocopy_send_server": true, 00:21:44.456 "enable_zerocopy_send_client": false, 00:21:44.456 "zerocopy_threshold": 0, 00:21:44.456 "tls_version": 0, 00:21:44.456 "enable_ktls": false 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "sock_impl_set_options", 00:21:44.456 "params": { 00:21:44.456 "impl_name": "ssl", 00:21:44.456 "recv_buf_size": 4096, 00:21:44.456 "send_buf_size": 4096, 00:21:44.456 "enable_recv_pipe": true, 00:21:44.456 "enable_quickack": false, 00:21:44.456 "enable_placement_id": 0, 00:21:44.456 "enable_zerocopy_send_server": true, 00:21:44.456 "enable_zerocopy_send_client": false, 00:21:44.456 "zerocopy_threshold": 0, 00:21:44.456 "tls_version": 0, 00:21:44.456 "enable_ktls": false 00:21:44.456 } 00:21:44.456 } 00:21:44.456 ] 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "subsystem": "vmd", 00:21:44.456 "config": [] 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "subsystem": "accel", 00:21:44.456 "config": [ 00:21:44.456 { 00:21:44.456 "method": "accel_set_options", 00:21:44.456 "params": { 00:21:44.456 "small_cache_size": 128, 00:21:44.456 "large_cache_size": 16, 00:21:44.456 "task_count": 2048, 00:21:44.456 "sequence_count": 2048, 00:21:44.456 "buf_count": 2048 00:21:44.456 } 00:21:44.456 } 00:21:44.456 ] 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "subsystem": "bdev", 00:21:44.456 "config": [ 00:21:44.456 { 00:21:44.456 "method": "bdev_set_options", 00:21:44.456 "params": { 00:21:44.456 "bdev_io_pool_size": 65535, 00:21:44.456 "bdev_io_cache_size": 256, 00:21:44.456 "bdev_auto_examine": true, 00:21:44.456 "iobuf_small_cache_size": 128, 00:21:44.456 "iobuf_large_cache_size": 16 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "bdev_raid_set_options", 00:21:44.456 "params": { 00:21:44.456 "process_window_size_kb": 1024 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "bdev_iscsi_set_options", 00:21:44.456 "params": { 00:21:44.456 "timeout_sec": 30 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "bdev_nvme_set_options", 00:21:44.456 "params": { 00:21:44.456 "action_on_timeout": "none", 00:21:44.456 "timeout_us": 0, 00:21:44.456 "timeout_admin_us": 0, 00:21:44.456 "keep_alive_timeout_ms": 10000, 00:21:44.456 "arbitration_burst": 0, 00:21:44.456 "low_priority_weight": 0, 00:21:44.456 "medium_priority_weight": 0, 00:21:44.456 "high_priority_weight": 0, 00:21:44.456 "nvme_adminq_poll_period_us": 10000, 00:21:44.456 "nvme_ioq_poll_period_us": 0, 00:21:44.456 "io_queue_requests": 0, 00:21:44.456 "delay_cmd_submit": true, 00:21:44.456 "transport_retry_count": 4, 00:21:44.456 "bdev_retry_count": 3, 00:21:44.456 "transport_ack_timeout": 0, 00:21:44.456 "ctrlr_loss_timeout_sec": 0, 00:21:44.456 "reconnect_delay_sec": 0, 00:21:44.456 "fast_io_fail_timeout_sec": 0, 00:21:44.456 "disable_auto_failback": false, 00:21:44.456 "generate_uuids": false, 00:21:44.456 "transport_tos": 0, 00:21:44.456 "nvme_error_stat": false, 00:21:44.456 "rdma_srq_size": 0, 00:21:44.456 "io_path_stat": false, 00:21:44.456 "allow_accel_sequence": false, 00:21:44.456 "rdma_max_cq_size": 0, 00:21:44.456 "rdma_cm_event_timeout_ms": 0, 00:21:44.456 "dhchap_digests": [ 00:21:44.456 "sha256", 
00:21:44.456 "sha384", 00:21:44.456 "sha512" 00:21:44.456 ], 00:21:44.456 "dhchap_dhgroups": [ 00:21:44.456 "null", 00:21:44.456 "ffdhe2048", 00:21:44.456 "ffdhe3072", 00:21:44.456 "ffdhe4096", 00:21:44.456 "ffdhe6144", 00:21:44.456 "ffdhe8192" 00:21:44.456 ] 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "bdev_nvme_set_hotplug", 00:21:44.456 "params": { 00:21:44.456 "period_us": 100000, 00:21:44.456 "enable": false 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "bdev_malloc_create", 00:21:44.456 "params": { 00:21:44.456 "name": "malloc0", 00:21:44.456 "num_blocks": 8192, 00:21:44.456 "block_size": 4096, 00:21:44.456 "physical_block_size": 4096, 00:21:44.456 "uuid": "f2464bc4-6570-4772-a951-00d60731d1cc", 00:21:44.456 "optimal_io_boundary": 0 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "bdev_wait_for_examine" 00:21:44.456 } 00:21:44.456 ] 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "subsystem": "nbd", 00:21:44.456 "config": [] 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "subsystem": "scheduler", 00:21:44.456 "config": [ 00:21:44.456 { 00:21:44.456 "method": "framework_set_scheduler", 00:21:44.456 "params": { 00:21:44.456 "name": "static" 00:21:44.456 } 00:21:44.456 } 00:21:44.456 ] 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "subsystem": "nvmf", 00:21:44.456 "config": [ 00:21:44.456 { 00:21:44.456 "method": "nvmf_set_config", 00:21:44.456 "params": { 00:21:44.456 "discovery_filter": "match_any", 00:21:44.456 "admin_cmd_passthru": { 00:21:44.456 "identify_ctrlr": false 00:21:44.456 } 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "nvmf_set_max_subsystems", 00:21:44.456 "params": { 00:21:44.456 "max_subsystems": 1024 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "nvmf_set_crdt", 00:21:44.456 "params": { 00:21:44.456 "crdt1": 0, 00:21:44.456 "crdt2": 0, 00:21:44.456 "crdt3": 0 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "nvmf_create_transport", 00:21:44.456 "params": { 00:21:44.456 "trtype": "TCP", 00:21:44.456 "max_queue_depth": 128, 00:21:44.456 "max_io_qpairs_per_ctrlr": 127, 00:21:44.456 "in_capsule_data_size": 4096, 00:21:44.456 "max_io_size": 131072, 00:21:44.456 "io_unit_size": 131072, 00:21:44.456 "max_aq_depth": 128, 00:21:44.456 "num_shared_buffers": 511, 00:21:44.456 "buf_cache_size": 4294967295, 00:21:44.456 "dif_insert_or_strip": false, 00:21:44.456 "zcopy": false, 00:21:44.456 "c2h_success": false, 00:21:44.456 "sock_priority": 0, 00:21:44.456 "abort_timeout_sec": 1, 00:21:44.456 "ack_timeout": 0, 00:21:44.456 "data_wr_pool_size": 0 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "nvmf_create_subsystem", 00:21:44.456 "params": { 00:21:44.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.456 "allow_any_host": false, 00:21:44.456 "serial_number": "00000000000000000000", 00:21:44.456 "model_number": "SPDK bdev Controller", 00:21:44.456 "max_namespaces": 32, 00:21:44.456 "min_cntlid": 1, 00:21:44.456 "max_cntlid": 65519, 00:21:44.456 "ana_reporting": false 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "nvmf_subsystem_add_host", 00:21:44.456 "params": { 00:21:44.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.456 "host": "nqn.2016-06.io.spdk:host1", 00:21:44.456 "psk": "key0" 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "nvmf_subsystem_add_ns", 00:21:44.456 "params": { 00:21:44.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.456 "namespace": { 00:21:44.456 "nsid": 1, 
00:21:44.456 "bdev_name": "malloc0", 00:21:44.456 "nguid": "F2464BC465704772A95100D60731D1CC", 00:21:44.456 "uuid": "f2464bc4-6570-4772-a951-00d60731d1cc", 00:21:44.456 "no_auto_visible": false 00:21:44.456 } 00:21:44.456 } 00:21:44.456 }, 00:21:44.456 { 00:21:44.456 "method": "nvmf_subsystem_add_listener", 00:21:44.456 "params": { 00:21:44.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.456 "listen_address": { 00:21:44.456 "trtype": "TCP", 00:21:44.456 "adrfam": "IPv4", 00:21:44.456 "traddr": "10.0.0.2", 00:21:44.457 "trsvcid": "4420" 00:21:44.457 }, 00:21:44.457 "secure_channel": true 00:21:44.457 } 00:21:44.457 } 00:21:44.457 ] 00:21:44.457 } 00:21:44.457 ] 00:21:44.457 }' 00:21:44.457 00:58:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3524128 00:21:44.457 00:58:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3524128 00:21:44.457 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3524128 ']' 00:21:44.457 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.457 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:44.457 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.457 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:44.457 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.457 00:58:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:44.457 [2024-05-15 00:58:31.294514] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:44.457 [2024-05-15 00:58:31.294634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.457 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.457 [2024-05-15 00:58:31.416570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.457 [2024-05-15 00:58:31.514478] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.457 [2024-05-15 00:58:31.514519] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.457 [2024-05-15 00:58:31.514528] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.457 [2024-05-15 00:58:31.514538] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.457 [2024-05-15 00:58:31.514546] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
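[Annotation, not part of the captured log] The JSON echoed above is not typed by hand: it is the tgtcfg string captured earlier with `rpc_cmd save_config` from the first target, and target/tls.sh:269 restarts nvmf_tgt with `-c /dev/fd/62` fed from that same string, so the keyring entry, the TLS listener and the PSK-restricted host entry come back without replaying any RPCs. A sketch of the pattern, with paths shortened relative to the SPDK tree:

  # capture the live configuration of the running target as JSON
  tgtcfg=$(scripts/rpc.py save_config)
  # ...stop the old target, then start a new one purely from that JSON;
  # the <(...) process substitution is what shows up as /dev/fd/62 in the trace
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &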
00:21:44.457 [2024-05-15 00:58:31.514625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.025 [2024-05-15 00:58:31.805233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.025 [2024-05-15 00:58:31.837161] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:45.025 [2024-05-15 00:58:31.837243] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.025 [2024-05-15 00:58:31.837481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.025 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:45.025 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:45.025 00:58:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.025 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.025 00:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3524168 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3524168 /var/tmp/bdevperf.sock 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3524168 ']' 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:45.025 00:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:45.025 "subsystems": [ 00:21:45.025 { 00:21:45.025 "subsystem": "keyring", 00:21:45.025 "config": [ 00:21:45.025 { 00:21:45.025 "method": "keyring_file_add_key", 00:21:45.025 "params": { 00:21:45.025 "name": "key0", 00:21:45.025 "path": "/tmp/tmp.18FsmkDu3W" 00:21:45.025 } 00:21:45.025 } 00:21:45.025 ] 00:21:45.025 }, 00:21:45.025 { 00:21:45.025 "subsystem": "iobuf", 00:21:45.025 "config": [ 00:21:45.025 { 00:21:45.025 "method": "iobuf_set_options", 00:21:45.025 "params": { 00:21:45.025 "small_pool_count": 8192, 00:21:45.025 "large_pool_count": 1024, 00:21:45.025 "small_bufsize": 8192, 00:21:45.025 "large_bufsize": 135168 00:21:45.025 } 00:21:45.025 } 00:21:45.025 ] 00:21:45.025 }, 00:21:45.025 { 00:21:45.025 "subsystem": "sock", 00:21:45.025 "config": [ 00:21:45.025 { 00:21:45.025 "method": "sock_impl_set_options", 00:21:45.025 "params": { 00:21:45.025 "impl_name": "posix", 00:21:45.025 "recv_buf_size": 2097152, 00:21:45.025 "send_buf_size": 2097152, 00:21:45.025 "enable_recv_pipe": true, 00:21:45.025 "enable_quickack": false, 00:21:45.025 "enable_placement_id": 0, 00:21:45.025 "enable_zerocopy_send_server": true, 00:21:45.025 "enable_zerocopy_send_client": false, 00:21:45.025 "zerocopy_threshold": 0, 00:21:45.025 "tls_version": 0, 00:21:45.025 "enable_ktls": false 00:21:45.025 } 00:21:45.025 }, 00:21:45.025 { 00:21:45.025 "method": "sock_impl_set_options", 00:21:45.025 "params": { 00:21:45.025 "impl_name": "ssl", 00:21:45.025 "recv_buf_size": 4096, 00:21:45.025 "send_buf_size": 4096, 00:21:45.025 "enable_recv_pipe": true, 00:21:45.025 "enable_quickack": false, 00:21:45.025 "enable_placement_id": 0, 00:21:45.025 "enable_zerocopy_send_server": true, 00:21:45.025 "enable_zerocopy_send_client": false, 00:21:45.025 "zerocopy_threshold": 0, 00:21:45.025 "tls_version": 0, 00:21:45.025 "enable_ktls": false 00:21:45.025 } 00:21:45.025 } 00:21:45.025 ] 00:21:45.025 }, 00:21:45.025 { 00:21:45.025 "subsystem": "vmd", 00:21:45.025 "config": [] 00:21:45.025 }, 00:21:45.025 { 00:21:45.025 "subsystem": "accel", 00:21:45.025 "config": [ 00:21:45.025 { 00:21:45.025 "method": "accel_set_options", 00:21:45.025 "params": { 00:21:45.026 "small_cache_size": 128, 00:21:45.026 "large_cache_size": 16, 00:21:45.026 "task_count": 2048, 00:21:45.026 "sequence_count": 2048, 00:21:45.026 "buf_count": 2048 00:21:45.026 } 00:21:45.026 } 00:21:45.026 ] 00:21:45.026 }, 00:21:45.026 { 00:21:45.026 "subsystem": "bdev", 00:21:45.026 "config": [ 00:21:45.026 { 00:21:45.026 "method": "bdev_set_options", 00:21:45.026 "params": { 00:21:45.026 "bdev_io_pool_size": 65535, 00:21:45.026 "bdev_io_cache_size": 256, 00:21:45.026 "bdev_auto_examine": true, 00:21:45.026 "iobuf_small_cache_size": 128, 00:21:45.026 "iobuf_large_cache_size": 16 00:21:45.026 } 00:21:45.026 }, 00:21:45.026 { 00:21:45.026 "method": "bdev_raid_set_options", 00:21:45.026 "params": { 00:21:45.026 "process_window_size_kb": 1024 00:21:45.026 } 00:21:45.026 }, 00:21:45.026 { 00:21:45.026 "method": "bdev_iscsi_set_options", 00:21:45.026 "params": { 00:21:45.026 "timeout_sec": 30 00:21:45.026 } 00:21:45.026 
}, 00:21:45.026 { 00:21:45.026 "method": "bdev_nvme_set_options", 00:21:45.026 "params": { 00:21:45.026 "action_on_timeout": "none", 00:21:45.026 "timeout_us": 0, 00:21:45.026 "timeout_admin_us": 0, 00:21:45.026 "keep_alive_timeout_ms": 10000, 00:21:45.026 "arbitration_burst": 0, 00:21:45.026 "low_priority_weight": 0, 00:21:45.026 "medium_priority_weight": 0, 00:21:45.026 "high_priority_weight": 0, 00:21:45.026 "nvme_adminq_poll_period_us": 10000, 00:21:45.026 "nvme_ioq_poll_period_us": 0, 00:21:45.026 "io_queue_requests": 512, 00:21:45.026 "delay_cmd_submit": true, 00:21:45.026 "transport_retry_count": 4, 00:21:45.026 "bdev_retry_count": 3, 00:21:45.026 "transport_ack_timeout": 0, 00:21:45.026 "ctrlr_loss_timeout_sec": 0, 00:21:45.026 "reconnect_delay_sec": 0, 00:21:45.026 "fast_io_fail_timeout_sec": 0, 00:21:45.026 "disable_auto_failback": false, 00:21:45.026 "generate_uuids": false, 00:21:45.026 "transport_tos": 0, 00:21:45.026 "nvme_error_stat": false, 00:21:45.026 "rdma_srq_size": 0, 00:21:45.026 "io_path_stat": false, 00:21:45.026 "allow_accel_sequence": false, 00:21:45.026 "rdma_max_cq_size": 0, 00:21:45.026 "rdma_cm_event_timeout_ms": 0, 00:21:45.026 "dhchap_digests": [ 00:21:45.026 "sha256", 00:21:45.026 "sha384", 00:21:45.026 "sha512" 00:21:45.026 ], 00:21:45.026 "dhchap_dhgroups": [ 00:21:45.026 "null", 00:21:45.026 "ffdhe2048", 00:21:45.026 "ffdhe3072", 00:21:45.026 "ffdhe4096", 00:21:45.026 "ffdhe6144", 00:21:45.026 "ffdhe8192" 00:21:45.026 ] 00:21:45.026 } 00:21:45.026 }, 00:21:45.026 { 00:21:45.026 "method": "bdev_nvme_attach_controller", 00:21:45.026 "params": { 00:21:45.026 "name": "nvme0", 00:21:45.026 "trtype": "TCP", 00:21:45.026 "adrfam": "IPv4", 00:21:45.026 "traddr": "10.0.0.2", 00:21:45.026 "trsvcid": "4420", 00:21:45.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.026 "prchk_reftag": false, 00:21:45.026 "prchk_guard": false, 00:21:45.026 "ctrlr_loss_timeout_sec": 0, 00:21:45.026 "reconnect_delay_sec": 0, 00:21:45.026 "fast_io_fail_timeout_sec": 0, 00:21:45.026 "psk": "key0", 00:21:45.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.026 "hdgst": false, 00:21:45.026 "ddgst": false 00:21:45.026 } 00:21:45.026 }, 00:21:45.026 { 00:21:45.026 "method": "bdev_nvme_set_hotplug", 00:21:45.026 "params": { 00:21:45.026 "period_us": 100000, 00:21:45.026 "enable": false 00:21:45.026 } 00:21:45.026 }, 00:21:45.026 { 00:21:45.026 "method": "bdev_enable_histogram", 00:21:45.026 "params": { 00:21:45.026 "name": "nvme0n1", 00:21:45.026 "enable": true 00:21:45.026 } 00:21:45.026 }, 00:21:45.026 { 00:21:45.026 "method": "bdev_wait_for_examine" 00:21:45.026 } 00:21:45.026 ] 00:21:45.026 }, 00:21:45.026 { 00:21:45.026 "subsystem": "nbd", 00:21:45.026 "config": [] 00:21:45.026 } 00:21:45.026 ] 00:21:45.026 }' 00:21:45.026 [2024-05-15 00:58:32.079572] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:21:45.026 [2024-05-15 00:58:32.079687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524168 ] 00:21:45.283 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.283 [2024-05-15 00:58:32.190418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.283 [2024-05-15 00:58:32.281587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.541 [2024-05-15 00:58:32.489310] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.801 00:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:45.801 00:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:45.801 00:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:45.801 00:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:46.060 00:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.060 00:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:46.060 Running I/O for 1 seconds... 00:21:46.996 00:21:46.996 Latency(us) 00:21:46.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.996 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:46.996 Verification LBA range: start 0x0 length 0x2000 00:21:46.997 nvme0n1 : 1.02 5462.21 21.34 0.00 0.00 23211.02 4691.00 29663.66 00:21:46.997 =================================================================================================================== 00:21:46.997 Total : 5462.21 21.34 0.00 0.00 23211.02 4691.00 29663.66 00:21:46.997 0 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:47.254 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:47.254 nvmf_trace.0 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3524168 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3524168 ']' 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3524168 00:21:47.255 
00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3524168 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3524168' 00:21:47.255 killing process with pid 3524168 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3524168 00:21:47.255 Received shutdown signal, test time was about 1.000000 seconds 00:21:47.255 00:21:47.255 Latency(us) 00:21:47.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.255 =================================================================================================================== 00:21:47.255 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.255 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3524168 00:21:47.515 00:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:47.515 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.515 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:47.515 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.515 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:47.515 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.515 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.515 rmmod nvme_tcp 00:21:47.515 rmmod nvme_fabrics 00:21:47.775 rmmod nvme_keyring 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3524128 ']' 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3524128 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3524128 ']' 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3524128 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3524128 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3524128' 00:21:47.775 killing process with pid 3524128 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3524128 00:21:47.775 [2024-05-15 00:58:34.667295] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:47.775 00:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 
3524128 00:21:48.345 00:58:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:48.345 00:58:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:48.345 00:58:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:48.345 00:58:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:48.345 00:58:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:48.345 00:58:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.345 00:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.345 00:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.252 00:58:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:50.252 00:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.8HUh6wOULl /tmp/tmp.6PwqVFwMos /tmp/tmp.18FsmkDu3W 00:21:50.252 00:21:50.252 real 1m27.669s 00:21:50.252 user 2m16.682s 00:21:50.252 sys 0m23.835s 00:21:50.252 00:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:50.252 00:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.252 ************************************ 00:21:50.252 END TEST nvmf_tls 00:21:50.252 ************************************ 00:21:50.252 00:58:37 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:50.252 00:58:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:50.252 00:58:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:50.252 00:58:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.252 ************************************ 00:21:50.252 START TEST nvmf_fips 00:21:50.252 ************************************ 00:21:50.252 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:50.511 * Looking for test storage... 
00:21:50.511 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.511 00:58:37 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.511 00:58:37 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
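[Annotation, not part of the captured log] The trace around this point is scripts/common.sh walking through the fips.sh check `ge 3.0.9 3.0.0`: both version strings are split on '.', '-' and ':' and compared field by field, and the FIPS test only proceeds when the installed OpenSSL reports at least 3.0.0. A simplified, self-contained restatement of that comparison (numeric fields only, not the literal cmp_versions body):

  version_ge() {                          # usage: version_ge 3.0.9 3.0.0
      local -a a b; local i
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # first differing field decides
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
      done
      return 0                            # all fields equal counts as "greater or equal"
  }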
00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:50.512 Error setting digest 00:21:50.512 00B2FA48867F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:50.512 00B2FA48867F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.512 00:58:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.788 
00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:55.788 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:55.788 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:55.788 Found net devices under 0000:27:00.0: cvl_0_0 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:55.788 Found net devices under 0000:27:00.1: cvl_0_1 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.788 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.789 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:21:56.050 00:21:56.050 --- 10.0.0.2 ping statistics --- 00:21:56.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.050 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:56.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:21:56.050 00:21:56.050 --- 10.0.0.1 ping statistics --- 00:21:56.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.050 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3528864 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3528864 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3528864 ']' 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.050 00:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:56.311 [2024-05-15 00:58:43.121574] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:56.311 [2024-05-15 00:58:43.121711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.311 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.311 [2024-05-15 00:58:43.282658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.573 [2024-05-15 00:58:43.425806] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.573 [2024-05-15 00:58:43.425868] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:56.573 [2024-05-15 00:58:43.425884] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.573 [2024-05-15 00:58:43.425899] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.573 [2024-05-15 00:58:43.425912] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.573 [2024-05-15 00:58:43.425963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:56.834 00:58:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:57.094 [2024-05-15 00:58:43.970581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.095 [2024-05-15 00:58:43.986464] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:57.095 [2024-05-15 00:58:43.986577] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:57.095 [2024-05-15 00:58:43.986871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.095 [2024-05-15 00:58:44.044165] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:57.095 malloc0 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3528971 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3528971 /var/tmp/bdevperf.sock 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3528971 ']' 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # 
local max_retries=100 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:57.095 00:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.355 [2024-05-15 00:58:44.185877] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:57.355 [2024-05-15 00:58:44.186033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528971 ] 00:21:57.355 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.355 [2024-05-15 00:58:44.319292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.355 [2024-05-15 00:58:44.416932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.926 00:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:57.926 00:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:21:57.926 00:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:58.184 [2024-05-15 00:58:45.007248] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.185 [2024-05-15 00:58:45.007395] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:58.185 TLSTESTn1 00:21:58.185 00:58:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:58.185 Running I/O for 10 seconds... 
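Annotation: from here the trace launches bdevperf in wait-for-RPC mode, attaches a TLS-enabled NVMe/TCP controller over its RPC socket, and drives the 10-second verify workload whose results follow. A condensed shell sketch of those three steps (commands and paths copied from the trace; SPDK_DIR is shorthand for the Jenkins workspace path, and the waitforlisten-style readiness polling the harness does between steps is omitted):

    # Paths as they appear in the trace above.
    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk

    # 1) Start bdevperf paused (-z), listening on its own RPC socket (fips.sh@145).
    "$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # 2) Attach the NVMe/TCP controller with the PSK written earlier (fips.sh@150).
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$SPDK_DIR/test/nvmf/fips/key.txt"

    # 3) Kick off the queued workload; the latency table below is its output (fips.sh@154).
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
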
00:22:08.164 00:22:08.164 Latency(us) 00:22:08.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.164 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:08.164 Verification LBA range: start 0x0 length 0x2000 00:22:08.164 TLSTESTn1 : 10.01 5589.82 21.84 0.00 0.00 22865.17 5001.43 32009.16 00:22:08.164 =================================================================================================================== 00:22:08.164 Total : 5589.82 21.84 0.00 0.00 22865.17 5001.43 32009.16 00:22:08.164 0 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:22:08.164 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:08.164 nvmf_trace.0 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3528971 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3528971 ']' 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3528971 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3528971 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3528971' 00:22:08.423 killing process with pid 3528971 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3528971 00:22:08.423 Received shutdown signal, test time was about 10.000000 seconds 00:22:08.423 00:22:08.423 Latency(us) 00:22:08.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.423 =================================================================================================================== 00:22:08.423 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.423 [2024-05-15 00:58:55.344622] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:08.423 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3528971 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.991 rmmod nvme_tcp 00:22:08.991 rmmod nvme_fabrics 00:22:08.991 rmmod nvme_keyring 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3528864 ']' 00:22:08.991 00:58:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3528864 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3528864 ']' 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3528864 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3528864 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3528864' 00:22:08.992 killing process with pid 3528864 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3528864 00:22:08.992 [2024-05-15 00:58:55.874864] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:08.992 [2024-05-15 00:58:55.874939] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:08.992 00:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3528864 00:22:09.557 00:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:09.557 00:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:09.557 00:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:09.557 00:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.557 00:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.557 00:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.557 00:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.557 00:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.551 00:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:11.551 00:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:11.551 00:22:11.551 real 0m21.143s 00:22:11.551 user 0m24.472s 00:22:11.551 sys 0m7.362s 00:22:11.551 00:58:58 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:22:11.551 00:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:11.551 ************************************ 00:22:11.551 END TEST nvmf_fips 00:22:11.551 ************************************ 00:22:11.551 00:58:58 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:22:11.551 00:58:58 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy-fallback == phy ]] 00:22:11.551 00:58:58 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:22:11.551 00:58:58 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.551 00:58:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:11.551 00:58:58 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:22:11.551 00:58:58 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:11.551 00:58:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:11.551 00:58:58 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:22:11.551 00:58:58 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:11.551 00:58:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:11.551 00:58:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:11.551 00:58:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:11.551 ************************************ 00:22:11.551 START TEST nvmf_multicontroller 00:22:11.551 ************************************ 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:11.551 * Looking for test storage... 00:22:11.551 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.551 00:58:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.551 00:58:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:11.551 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:11.552 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:11.552 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.552 00:58:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.552 00:58:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.812 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:22:11.812 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:11.812 00:58:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.812 00:58:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.087 00:59:03 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:17.087 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 
(0x8086 - 0x159b)' 00:22:17.087 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:17.087 Found net devices under 0000:27:00.0: cvl_0_0 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:17.087 Found net devices under 0000:27:00.1: cvl_0_1 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.087 00:59:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.087 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.087 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.088 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.088 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:22:17.347 00:22:17.347 --- 10.0.0.2 ping statistics --- 00:22:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.347 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:17.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:22:17.347 00:22:17.347 --- 10.0.0.1 ping statistics --- 00:22:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.347 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3535229 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3535229 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3535229 ']' 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:17.347 00:59:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:17.347 [2024-05-15 00:59:04.299834] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:17.347 [2024-05-15 00:59:04.299950] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.347 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.605 [2024-05-15 00:59:04.432833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.605 [2024-05-15 00:59:04.531687] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:17.605 [2024-05-15 00:59:04.531725] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.605 [2024-05-15 00:59:04.531734] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.605 [2024-05-15 00:59:04.531744] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.605 [2024-05-15 00:59:04.531752] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.605 [2024-05-15 00:59:04.531919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.605 [2024-05-15 00:59:04.532018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.605 [2024-05-15 00:59:04.532029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.174 [2024-05-15 00:59:05.054569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.174 Malloc0 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.174 [2024-05-15 00:59:05.130061] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:18.174 [2024-05-15 00:59:05.130318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.174 [2024-05-15 00:59:05.138172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.174 Malloc1 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.174 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.175 00:59:05 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3535539 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3535539 /var/tmp/bdevperf.sock 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3535539 ']' 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.175 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:19.110 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:19.110 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:22:19.110 00:59:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:19.110 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.110 00:59:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 NVMe0n1 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.370 1 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 request: 00:22:19.370 { 00:22:19.370 "name": "NVMe0", 00:22:19.370 "trtype": "tcp", 00:22:19.370 "traddr": "10.0.0.2", 00:22:19.370 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:19.370 "hostaddr": "10.0.0.2", 00:22:19.370 "hostsvcid": "60000", 00:22:19.370 "adrfam": "ipv4", 00:22:19.370 "trsvcid": "4420", 00:22:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.370 "method": "bdev_nvme_attach_controller", 00:22:19.370 "req_id": 1 00:22:19.370 } 00:22:19.370 Got JSON-RPC error response 00:22:19.370 response: 00:22:19.370 { 00:22:19.370 "code": -114, 00:22:19.370 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:19.370 } 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.370 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 request: 00:22:19.370 { 00:22:19.370 "name": "NVMe0", 00:22:19.370 "trtype": "tcp", 00:22:19.370 "traddr": "10.0.0.2", 00:22:19.370 "hostaddr": "10.0.0.2", 00:22:19.370 "hostsvcid": "60000", 00:22:19.370 "adrfam": "ipv4", 00:22:19.370 "trsvcid": "4420", 00:22:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:19.370 "method": "bdev_nvme_attach_controller", 00:22:19.370 "req_id": 1 00:22:19.370 } 00:22:19.370 Got JSON-RPC error response 00:22:19.370 response: 00:22:19.370 { 00:22:19.370 "code": -114, 00:22:19.371 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:19.371 } 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.371 request: 00:22:19.371 { 00:22:19.371 "name": "NVMe0", 00:22:19.371 "trtype": "tcp", 00:22:19.371 "traddr": "10.0.0.2", 00:22:19.371 "hostaddr": "10.0.0.2", 00:22:19.371 "hostsvcid": "60000", 00:22:19.371 "adrfam": "ipv4", 00:22:19.371 "trsvcid": "4420", 00:22:19.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.371 "multipath": "disable", 00:22:19.371 "method": "bdev_nvme_attach_controller", 00:22:19.371 "req_id": 1 00:22:19.371 } 00:22:19.371 Got JSON-RPC error response 00:22:19.371 response: 00:22:19.371 { 00:22:19.371 "code": -114, 00:22:19.371 "message": "A controller named NVMe0 already 
exists and multipath is disabled\n" 00:22:19.371 } 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.371 request: 00:22:19.371 { 00:22:19.371 "name": "NVMe0", 00:22:19.371 "trtype": "tcp", 00:22:19.371 "traddr": "10.0.0.2", 00:22:19.371 "hostaddr": "10.0.0.2", 00:22:19.371 "hostsvcid": "60000", 00:22:19.371 "adrfam": "ipv4", 00:22:19.371 "trsvcid": "4420", 00:22:19.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.371 "multipath": "failover", 00:22:19.371 "method": "bdev_nvme_attach_controller", 00:22:19.371 "req_id": 1 00:22:19.371 } 00:22:19.371 Got JSON-RPC error response 00:22:19.371 response: 00:22:19.371 { 00:22:19.371 "code": -114, 00:22:19.371 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:19.371 } 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.371 00:59:06 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.371 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.631 00:22:19.631 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.631 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.631 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.631 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.631 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.631 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:19.631 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.631 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.890 00:22:19.890 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.890 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.890 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.890 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:19.890 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.890 00:59:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.890 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:19.890 00:59:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:20.824 0 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3535539 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3535539 ']' 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3535539 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:20.824 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3535539 00:22:21.085 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:21.085 00:59:07 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:21.085 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3535539' 00:22:21.085 killing process with pid 3535539 00:22:21.085 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3535539 00:22:21.085 00:59:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3535539 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:22:21.346 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:21.346 [2024-05-15 00:59:05.278839] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:21.346 [2024-05-15 00:59:05.278959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535539 ] 00:22:21.346 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.346 [2024-05-15 00:59:05.390276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.346 [2024-05-15 00:59:05.481477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.346 [2024-05-15 00:59:06.721102] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 4b77772c-747e-4d7a-aced-6d53a01401e4 already exists 00:22:21.346 [2024-05-15 00:59:06.721148] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:4b77772c-747e-4d7a-aced-6d53a01401e4 alias for bdev NVMe1n1 00:22:21.346 [2024-05-15 00:59:06.721168] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:21.346 Running I/O for 1 seconds... 
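The bdevperf log reproduced here shows the behaviour the multicontroller test is checking: once a controller named NVMe0 is attached, re-attaching the same name with a different host NQN, a different subsystem NQN, or -x disable is rejected with JSON-RPC error -114, while attaching the same name to the subsystem's second listener succeeds. A minimal sketch of that sequence, assuming it is run from an SPDK checkout and that scripts/rpc.py stands in for the harness's rpc_cmd wrapper:

# start bdevperf in RPC-wait mode (-z) on its own socket, matching the options used above
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

# first path: creates bdev NVMe0n1 backed by cnode1 on port 4420
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# re-attach attempts that are expected to fail with -114, as in the request/response dumps above
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || true
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable || true

# attaching the same name via the subsystem's second listener (port 4421) succeeds
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests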
00:22:21.346 00:22:21.346 Latency(us) 00:22:21.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.346 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:21.346 NVMe0n1 : 1.00 25684.19 100.33 0.00 0.00 4976.86 4294.33 11796.48 00:22:21.346 =================================================================================================================== 00:22:21.346 Total : 25684.19 100.33 0.00 0.00 4976.86 4294.33 11796.48 00:22:21.346 Received shutdown signal, test time was about 1.000000 seconds 00:22:21.346 00:22:21.346 Latency(us) 00:22:21.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.346 =================================================================================================================== 00:22:21.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.346 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.346 rmmod nvme_tcp 00:22:21.346 rmmod nvme_fabrics 00:22:21.346 rmmod nvme_keyring 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3535229 ']' 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3535229 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3535229 ']' 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3535229 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:21.346 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3535229 00:22:21.607 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:21.607 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:21.607 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3535229' 00:22:21.607 killing process with pid 3535229 00:22:21.607 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3535229 00:22:21.607 [2024-05-15 00:59:08.431620] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:21.607 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3535229 00:22:22.173 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:22.173 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:22.174 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:22.174 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.174 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:22.174 00:59:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.174 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.174 00:59:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.079 00:59:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:24.079 00:22:24.079 real 0m12.521s 00:22:24.079 user 0m17.874s 00:22:24.079 sys 0m4.937s 00:22:24.079 00:59:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:24.079 00:59:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.079 ************************************ 00:22:24.079 END TEST nvmf_multicontroller 00:22:24.079 ************************************ 00:22:24.079 00:59:11 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:24.079 00:59:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:24.079 00:59:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:24.079 00:59:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:24.079 ************************************ 00:22:24.079 START TEST nvmf_aer 00:22:24.079 ************************************ 00:22:24.079 00:59:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:24.339 * Looking for test storage... 
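The teardown recorded above follows the harness's usual pattern: unload the host-side NVMe fabrics modules, stop the nvmf target, and dismantle the test network namespace. A rough sketch of the same steps, assuming the pid, namespace, and interface names reported in this particular run (and that remove_spdk_ns simply deletes the namespace):

modprobe -v -r nvme-tcp        # prints the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen above
modprobe -v -r nvme-fabrics
kill 3535229                   # pid of the nvmf_tgt started for this test
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1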
00:22:24.339 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:24.339 00:59:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 
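The sourced common.sh above fixes the target ports (4420/4421/4422) and generates a fresh host identity with nvme gen-hostnqn. A small sketch of the same idea (how the harness derives these may differ in detail; the subsystem NQN shown is the one used throughout this run):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}      # the uuid part, matching NVME_HOSTID above

# later passed to nvme connect, e.g.:
# nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
#     --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID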
00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.916 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:30.917 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:30.917 Found 
0000:27:00.1 (0x8086 - 0x159b) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:30.917 Found net devices under 0000:27:00.0: cvl_0_0 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:30.917 Found net devices under 0000:27:00.1: cvl_0_1 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.917 00:59:16 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:22:30.917 00:22:30.917 --- 10.0.0.2 ping statistics --- 00:22:30.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.917 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
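The nvmftestinit sequence above puts the target-side port in its own network namespace so initiator and target can exercise real NICs on a single machine: cvl_0_0 (10.0.0.2) lives inside cvl_0_0_ns_spdk and cvl_0_1 (10.0.0.1) stays in the root namespace. Condensed, using the names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target interface moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator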
00:22:30.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:22:30.917 00:22:30.917 --- 10.0.0.1 ping statistics --- 00:22:30.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.917 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.917 00:59:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3540165 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3540165 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3540165 ']' 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:30.917 [2024-05-15 00:59:17.119365] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:30.917 [2024-05-15 00:59:17.119493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.917 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.917 [2024-05-15 00:59:17.258213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.917 [2024-05-15 00:59:17.356423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.917 [2024-05-15 00:59:17.356483] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:30.917 [2024-05-15 00:59:17.356494] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.917 [2024-05-15 00:59:17.356504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.917 [2024-05-15 00:59:17.356511] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.917 [2024-05-15 00:59:17.356603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.917 [2024-05-15 00:59:17.356699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.917 [2024-05-15 00:59:17.356800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.917 [2024-05-15 00:59:17.356811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.917 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.918 [2024-05-15 00:59:17.876005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.918 Malloc0 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.918 [2024-05-15 00:59:17.944597] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:30.918 [2024-05-15 00:59:17.944971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.918 [ 00:22:30.918 { 00:22:30.918 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:30.918 "subtype": "Discovery", 00:22:30.918 "listen_addresses": [], 00:22:30.918 "allow_any_host": true, 00:22:30.918 "hosts": [] 00:22:30.918 }, 00:22:30.918 { 00:22:30.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.918 "subtype": "NVMe", 00:22:30.918 "listen_addresses": [ 00:22:30.918 { 00:22:30.918 "trtype": "TCP", 00:22:30.918 "adrfam": "IPv4", 00:22:30.918 "traddr": "10.0.0.2", 00:22:30.918 "trsvcid": "4420" 00:22:30.918 } 00:22:30.918 ], 00:22:30.918 "allow_any_host": true, 00:22:30.918 "hosts": [], 00:22:30.918 "serial_number": "SPDK00000000000001", 00:22:30.918 "model_number": "SPDK bdev Controller", 00:22:30.918 "max_namespaces": 2, 00:22:30.918 "min_cntlid": 1, 00:22:30.918 "max_cntlid": 65519, 00:22:30.918 "namespaces": [ 00:22:30.918 { 00:22:30.918 "nsid": 1, 00:22:30.918 "bdev_name": "Malloc0", 00:22:30.918 "name": "Malloc0", 00:22:30.918 "nguid": "FB0BE0F255744FFFBECAB49D543EF071", 00:22:30.918 "uuid": "fb0be0f2-5574-4fff-beca-b49d543ef071" 00:22:30.918 } 00:22:30.918 ] 00:22:30.918 } 00:22:30.918 ] 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3540344 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:30.918 00:59:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:22:31.178 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.178 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:22:31.178 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:22:31.178 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:22:31.178 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.178 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:31.178 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:22:31.178 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:22:31.178 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:22:31.438 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.438 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.438 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.439 Malloc1 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.439 [ 00:22:31.439 { 00:22:31.439 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:31.439 "subtype": "Discovery", 00:22:31.439 "listen_addresses": [], 00:22:31.439 "allow_any_host": true, 00:22:31.439 "hosts": [] 00:22:31.439 }, 00:22:31.439 { 00:22:31.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.439 "subtype": "NVMe", 00:22:31.439 "listen_addresses": [ 00:22:31.439 { 00:22:31.439 "trtype": "TCP", 00:22:31.439 "adrfam": "IPv4", 00:22:31.439 "traddr": "10.0.0.2", 00:22:31.439 "trsvcid": "4420" 00:22:31.439 } 00:22:31.439 ], 00:22:31.439 "allow_any_host": true, 00:22:31.439 "hosts": [], 00:22:31.439 "serial_number": "SPDK00000000000001", 00:22:31.439 "model_number": "SPDK bdev Controller", 00:22:31.439 "max_namespaces": 2, 00:22:31.439 "min_cntlid": 1, 00:22:31.439 "max_cntlid": 65519, 00:22:31.439 "namespaces": [ 00:22:31.439 { 00:22:31.439 "nsid": 1, 00:22:31.439 "bdev_name": "Malloc0", 00:22:31.439 "name": "Malloc0", 00:22:31.439 "nguid": "FB0BE0F255744FFFBECAB49D543EF071", 00:22:31.439 "uuid": "fb0be0f2-5574-4fff-beca-b49d543ef071" 00:22:31.439 }, 00:22:31.439 { 00:22:31.439 "nsid": 2, 00:22:31.439 "bdev_name": "Malloc1", 00:22:31.439 "name": "Malloc1", 00:22:31.439 "nguid": "0A054A31325B46CEB731FD297BE69430", 00:22:31.439 "uuid": "0a054a31-325b-46ce-b731-fd297be69430" 00:22:31.439 } 00:22:31.439 ] 00:22:31.439 } 00:22:31.439 ] 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3540344 00:22:31.439 Asynchronous Event Request test 00:22:31.439 Attaching to 10.0.0.2 00:22:31.439 Attached to 10.0.0.2 00:22:31.439 Registering asynchronous event callbacks... 00:22:31.439 Starting namespace attribute notice tests for all controllers... 
00:22:31.439 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:31.439 aer_cb - Changed Namespace 00:22:31.439 Cleaning up... 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.439 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.697 rmmod nvme_tcp 00:22:31.697 rmmod nvme_fabrics 00:22:31.697 rmmod nvme_keyring 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3540165 ']' 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3540165 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3540165 ']' 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3540165 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3540165 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3540165' 00:22:31.697 killing process with pid 3540165 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3540165 00:22:31.697 [2024-05-15 00:59:18.681557] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in 
favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:31.697 00:59:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3540165 00:22:32.261 00:59:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.261 00:59:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.261 00:59:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.261 00:59:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.261 00:59:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.261 00:59:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.261 00:59:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.261 00:59:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.166 00:59:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.166 00:22:34.166 real 0m10.099s 00:22:34.166 user 0m8.322s 00:22:34.166 sys 0m4.874s 00:22:34.166 00:59:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:34.166 00:59:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.166 ************************************ 00:22:34.166 END TEST nvmf_aer 00:22:34.166 ************************************ 00:22:34.427 00:59:21 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:34.427 00:59:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:34.427 00:59:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:34.427 00:59:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.427 ************************************ 00:22:34.427 START TEST nvmf_async_init 00:22:34.427 ************************************ 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:34.427 * Looking for test storage... 
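The nvmf_aer run that finishes above is driven entirely through rpc_cmd, the harness wrapper around scripts/rpc.py. A minimal sketch of the same target-side sequence, reconstructed from the xtrace output (socket path and script locations are the SPDK defaults and are assumed here):

    # host/aer.sh target setup, as traced above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, flags exactly as traced
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0         # 64 MiB ramdisk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # with test/nvme/aer/aer attached (the -n 2 / -t touch-file invocation earlier in the log),
    # hot-adding a second namespace is what produces the "aer_cb - Changed Namespace" lines
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2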
00:22:34.427 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d44274ef7da541c2b40d1aebe3c6630b 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.427 00:59:21 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.427 00:59:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:41.004 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:41.004 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:41.004 Found net devices under 0000:27:00.0: cvl_0_0 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.004 
00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:41.004 Found net devices under 0000:27:00.1: cvl_0_1 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:41.004 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:22:41.004 00:22:41.004 --- 10.0.0.2 ping statistics --- 00:22:41.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.004 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:22:41.004 00:22:41.004 --- 10.0.0.1 ping statistics --- 00:22:41.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.004 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.004 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3544527 00:22:41.005 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3544527 00:22:41.005 00:59:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3544527 ']' 00:22:41.005 00:59:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.005 00:59:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:41.005 00:59:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:41.005 00:59:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.005 00:59:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:41.005 00:59:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.005 [2024-05-15 00:59:27.794022] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
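Before the target is started for nvmf_async_init, nvmf_tcp_init in nvmf/common.sh wires the two detected E810 ports into a loopback-style topology: cvl_0_0 is moved into a private network namespace and becomes the target-side interface (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed sketch of the steps traced above (interface names are those detected on this machine):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                               # sanity checks, as in the log
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1), which is what produces the SPDK/DPDK initialization banner around this point in the log.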
00:22:41.005 [2024-05-15 00:59:27.794158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.005 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.005 [2024-05-15 00:59:27.936700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.005 [2024-05-15 00:59:28.036804] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.005 [2024-05-15 00:59:28.036859] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.005 [2024-05-15 00:59:28.036870] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.005 [2024-05-15 00:59:28.036881] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.005 [2024-05-15 00:59:28.036889] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.005 [2024-05-15 00:59:28.036923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.576 [2024-05-15 00:59:28.555950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.576 null0 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d44274ef7da541c2b40d1aebe3c6630b 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.576 [2024-05-15 00:59:28.599878] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:41.576 [2024-05-15 00:59:28.600200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.576 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.577 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:41.577 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.577 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.837 nvme0n1 00:22:41.837 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.838 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:41.838 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.838 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.838 [ 00:22:41.838 { 00:22:41.838 "name": "nvme0n1", 00:22:41.838 "aliases": [ 00:22:41.838 "d44274ef-7da5-41c2-b40d-1aebe3c6630b" 00:22:41.838 ], 00:22:41.838 "product_name": "NVMe disk", 00:22:41.838 "block_size": 512, 00:22:41.838 "num_blocks": 2097152, 00:22:41.838 "uuid": "d44274ef-7da5-41c2-b40d-1aebe3c6630b", 00:22:41.838 "assigned_rate_limits": { 00:22:41.838 "rw_ios_per_sec": 0, 00:22:41.838 "rw_mbytes_per_sec": 0, 00:22:41.838 "r_mbytes_per_sec": 0, 00:22:41.838 "w_mbytes_per_sec": 0 00:22:41.838 }, 00:22:41.838 "claimed": false, 00:22:41.838 "zoned": false, 00:22:41.838 "supported_io_types": { 00:22:41.838 "read": true, 00:22:41.838 "write": true, 00:22:41.838 "unmap": false, 00:22:41.838 "write_zeroes": true, 00:22:41.838 "flush": true, 00:22:41.838 "reset": true, 00:22:41.838 "compare": true, 00:22:41.838 "compare_and_write": true, 00:22:41.838 "abort": true, 00:22:41.838 "nvme_admin": true, 00:22:41.838 "nvme_io": true 00:22:41.838 }, 00:22:41.838 "memory_domains": [ 00:22:41.838 { 00:22:41.838 "dma_device_id": "system", 00:22:41.838 "dma_device_type": 1 00:22:41.838 } 00:22:41.838 ], 00:22:41.838 "driver_specific": { 00:22:41.838 "nvme": [ 00:22:41.838 { 00:22:41.838 "trid": { 00:22:41.838 "trtype": "TCP", 00:22:41.838 "adrfam": "IPv4", 00:22:41.838 "traddr": "10.0.0.2", 00:22:41.838 "trsvcid": "4420", 00:22:41.838 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:41.838 }, 
00:22:41.838 "ctrlr_data": { 00:22:41.838 "cntlid": 1, 00:22:41.838 "vendor_id": "0x8086", 00:22:41.838 "model_number": "SPDK bdev Controller", 00:22:41.838 "serial_number": "00000000000000000000", 00:22:41.838 "firmware_revision": "24.05", 00:22:41.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:41.838 "oacs": { 00:22:41.838 "security": 0, 00:22:41.838 "format": 0, 00:22:41.838 "firmware": 0, 00:22:41.838 "ns_manage": 0 00:22:41.838 }, 00:22:41.838 "multi_ctrlr": true, 00:22:41.838 "ana_reporting": false 00:22:41.838 }, 00:22:41.838 "vs": { 00:22:41.838 "nvme_version": "1.3" 00:22:41.838 }, 00:22:41.838 "ns_data": { 00:22:41.838 "id": 1, 00:22:41.838 "can_share": true 00:22:41.838 } 00:22:41.838 } 00:22:41.838 ], 00:22:41.838 "mp_policy": "active_passive" 00:22:41.838 } 00:22:41.838 } 00:22:41.838 ] 00:22:41.838 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.838 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:41.838 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.838 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.838 [2024-05-15 00:59:28.853651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:41.838 [2024-05-15 00:59:28.853740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:22:42.099 [2024-05-15 00:59:28.985150] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:42.099 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.099 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:42.099 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.099 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.099 [ 00:22:42.099 { 00:22:42.099 "name": "nvme0n1", 00:22:42.099 "aliases": [ 00:22:42.099 "d44274ef-7da5-41c2-b40d-1aebe3c6630b" 00:22:42.099 ], 00:22:42.099 "product_name": "NVMe disk", 00:22:42.099 "block_size": 512, 00:22:42.099 "num_blocks": 2097152, 00:22:42.099 "uuid": "d44274ef-7da5-41c2-b40d-1aebe3c6630b", 00:22:42.099 "assigned_rate_limits": { 00:22:42.099 "rw_ios_per_sec": 0, 00:22:42.099 "rw_mbytes_per_sec": 0, 00:22:42.099 "r_mbytes_per_sec": 0, 00:22:42.099 "w_mbytes_per_sec": 0 00:22:42.099 }, 00:22:42.099 "claimed": false, 00:22:42.099 "zoned": false, 00:22:42.099 "supported_io_types": { 00:22:42.099 "read": true, 00:22:42.099 "write": true, 00:22:42.099 "unmap": false, 00:22:42.099 "write_zeroes": true, 00:22:42.099 "flush": true, 00:22:42.099 "reset": true, 00:22:42.099 "compare": true, 00:22:42.099 "compare_and_write": true, 00:22:42.099 "abort": true, 00:22:42.099 "nvme_admin": true, 00:22:42.099 "nvme_io": true 00:22:42.099 }, 00:22:42.099 "memory_domains": [ 00:22:42.099 { 00:22:42.099 "dma_device_id": "system", 00:22:42.099 "dma_device_type": 1 00:22:42.099 } 00:22:42.099 ], 00:22:42.099 "driver_specific": { 00:22:42.099 "nvme": [ 00:22:42.099 { 00:22:42.099 "trid": { 00:22:42.099 "trtype": "TCP", 00:22:42.099 "adrfam": "IPv4", 00:22:42.099 "traddr": "10.0.0.2", 00:22:42.099 "trsvcid": "4420", 00:22:42.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:42.099 }, 00:22:42.099 "ctrlr_data": { 00:22:42.099 "cntlid": 2, 00:22:42.099 
"vendor_id": "0x8086", 00:22:42.099 "model_number": "SPDK bdev Controller", 00:22:42.099 "serial_number": "00000000000000000000", 00:22:42.099 "firmware_revision": "24.05", 00:22:42.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.099 "oacs": { 00:22:42.099 "security": 0, 00:22:42.099 "format": 0, 00:22:42.099 "firmware": 0, 00:22:42.099 "ns_manage": 0 00:22:42.099 }, 00:22:42.099 "multi_ctrlr": true, 00:22:42.099 "ana_reporting": false 00:22:42.099 }, 00:22:42.099 "vs": { 00:22:42.099 "nvme_version": "1.3" 00:22:42.099 }, 00:22:42.099 "ns_data": { 00:22:42.099 "id": 1, 00:22:42.099 "can_share": true 00:22:42.099 } 00:22:42.099 } 00:22:42.099 ], 00:22:42.099 "mp_policy": "active_passive" 00:22:42.099 } 00:22:42.099 } 00:22:42.099 ] 00:22:42.099 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.099 00:59:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.099 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.099 00:59:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5Iqzli2MUM 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5Iqzli2MUM 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.099 [2024-05-15 00:59:29.033763] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:42.099 [2024-05-15 00:59:29.033910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Iqzli2MUM 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.099 [2024-05-15 00:59:29.041769] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.099 00:59:29 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Iqzli2MUM 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.099 [2024-05-15 00:59:29.049760] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.099 [2024-05-15 00:59:29.049836] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:42.099 nvme0n1 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.099 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.099 [ 00:22:42.099 { 00:22:42.099 "name": "nvme0n1", 00:22:42.099 "aliases": [ 00:22:42.099 "d44274ef-7da5-41c2-b40d-1aebe3c6630b" 00:22:42.099 ], 00:22:42.099 "product_name": "NVMe disk", 00:22:42.099 "block_size": 512, 00:22:42.099 "num_blocks": 2097152, 00:22:42.099 "uuid": "d44274ef-7da5-41c2-b40d-1aebe3c6630b", 00:22:42.099 "assigned_rate_limits": { 00:22:42.099 "rw_ios_per_sec": 0, 00:22:42.100 "rw_mbytes_per_sec": 0, 00:22:42.100 "r_mbytes_per_sec": 0, 00:22:42.100 "w_mbytes_per_sec": 0 00:22:42.100 }, 00:22:42.100 "claimed": false, 00:22:42.100 "zoned": false, 00:22:42.100 "supported_io_types": { 00:22:42.100 "read": true, 00:22:42.100 "write": true, 00:22:42.100 "unmap": false, 00:22:42.100 "write_zeroes": true, 00:22:42.100 "flush": true, 00:22:42.100 "reset": true, 00:22:42.100 "compare": true, 00:22:42.100 "compare_and_write": true, 00:22:42.100 "abort": true, 00:22:42.100 "nvme_admin": true, 00:22:42.100 "nvme_io": true 00:22:42.100 }, 00:22:42.100 "memory_domains": [ 00:22:42.100 { 00:22:42.100 "dma_device_id": "system", 00:22:42.100 "dma_device_type": 1 00:22:42.100 } 00:22:42.100 ], 00:22:42.100 "driver_specific": { 00:22:42.100 "nvme": [ 00:22:42.100 { 00:22:42.100 "trid": { 00:22:42.100 "trtype": "TCP", 00:22:42.100 "adrfam": "IPv4", 00:22:42.100 "traddr": "10.0.0.2", 00:22:42.100 "trsvcid": "4421", 00:22:42.100 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:42.100 }, 00:22:42.100 "ctrlr_data": { 00:22:42.100 "cntlid": 3, 00:22:42.100 "vendor_id": "0x8086", 00:22:42.100 "model_number": "SPDK bdev Controller", 00:22:42.100 "serial_number": "00000000000000000000", 00:22:42.100 "firmware_revision": "24.05", 00:22:42.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.100 "oacs": { 00:22:42.100 "security": 0, 00:22:42.100 "format": 0, 00:22:42.100 "firmware": 0, 00:22:42.100 "ns_manage": 0 00:22:42.100 }, 00:22:42.100 "multi_ctrlr": true, 00:22:42.100 "ana_reporting": false 00:22:42.100 }, 00:22:42.100 "vs": { 00:22:42.100 "nvme_version": "1.3" 00:22:42.100 }, 00:22:42.100 "ns_data": { 00:22:42.100 "id": 1, 00:22:42.100 "can_share": true 00:22:42.100 } 00:22:42.100 } 00:22:42.100 ], 00:22:42.100 "mp_policy": "active_passive" 00:22:42.100 } 00:22:42.100 } 00:22:42.100 ] 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.5Iqzli2MUM 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.100 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.399 rmmod nvme_tcp 00:22:42.399 rmmod nvme_fabrics 00:22:42.399 rmmod nvme_keyring 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3544527 ']' 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3544527 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3544527 ']' 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3544527 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3544527 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3544527' 00:22:42.399 killing process with pid 3544527 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3544527 00:22:42.399 [2024-05-15 00:59:29.278672] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:42.399 [2024-05-15 00:59:29.278713] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:42.399 [2024-05-15 00:59:29.278725] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:42.399 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3544527 00:22:42.996 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.996 00:59:29 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.996 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.996 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.996 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.996 00:59:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.996 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.996 00:59:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.907 00:59:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:44.907 00:22:44.907 real 0m10.550s 00:22:44.907 user 0m3.674s 00:22:44.907 sys 0m5.205s 00:22:44.907 00:59:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:44.907 00:59:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.907 ************************************ 00:22:44.907 END TEST nvmf_async_init 00:22:44.907 ************************************ 00:22:44.907 00:59:31 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:44.907 00:59:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:44.907 00:59:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:44.907 00:59:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.907 ************************************ 00:22:44.907 START TEST dma 00:22:44.907 ************************************ 00:22:44.907 00:59:31 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:44.907 * Looking for test storage... 
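The nvmf_async_init teardown above closes out the TLS portion of that test. The secure-channel attach amounts to handing the same PSK interchange file to both sides of the connection; a minimal sketch reconstructed from the trace (key contents and NQNs are exactly those used by host/async_init.sh):

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    # restrict the subsystem and open a TLS listener on a second port
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    # initiator side: attach via bdev_nvme on the new port with the same PSK
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The deprecation warnings about the PSK path and spdk_nvme_ctrlr_opts.psk printed during killprocess are expected with this v24.05-pre build, where TLS support is still flagged experimental by both the TCP listener and the bdev_nvme attach RPC.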
00:22:44.907 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:44.907 00:59:31 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.907 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:44.907 00:59:31 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.907 00:59:31 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.907 00:59:31 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.907 00:59:31 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.907 00:59:31 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.908 00:59:31 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.908 00:59:31 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:44.908 00:59:31 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.908 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:44.908 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.908 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.908 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.908 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.908 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.908 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.908 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.908 00:59:31 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.908 00:59:31 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:44.908 00:59:31 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:44.908 00:22:44.908 real 0m0.087s 00:22:44.908 user 0m0.038s 00:22:44.908 sys 0m0.056s 00:22:44.908 00:59:31 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:44.908 00:59:31 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:22:44.908 ************************************ 00:22:44.908 END TEST dma 00:22:44.908 ************************************ 00:22:45.169 00:59:31 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:45.169 00:59:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:45.169 00:59:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:45.169 00:59:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.169 ************************************ 00:22:45.169 START TEST nvmf_identify 00:22:45.169 ************************************ 00:22:45.169 00:59:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:45.169 * Looking for test storage... 
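The dma suite above is effectively a no-op on this run: host/dma.sh only applies to the rdma transport, so with --transport=tcp it returns before starting a target, which is why its timing block reports only a few hundredths of a second. The early-exit guard, as traced ('[' tcp '!=' rdma ']' followed by exit 0), is roughly the following sketch; the variable name is assumed from the harness convention, not shown verbatim in the trace:

    # host/dma.sh, early-exit guard (sketch)
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0          # the DMA test paths are only meaningful over RDMA
    fi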
00:22:45.169 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:45.169 00:59:32 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.169 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:45.169 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.169 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.169 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.169 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.170 00:59:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:50.446 00:59:37 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:50.446 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:50.446 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:50.446 Found net devices under 0000:27:00.0: cvl_0_0 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.446 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:50.447 Found net devices under 0000:27:00.1: cvl_0_1 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.447 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:22:50.707 00:22:50.707 --- 10.0.0.2 ping statistics --- 00:22:50.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.707 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:22:50.707 00:22:50.707 --- 10.0.0.1 ping statistics --- 00:22:50.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.707 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3549011 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3549011 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3549011 ']' 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.707 00:59:37 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:50.966 [2024-05-15 00:59:37.804683] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:50.966 [2024-05-15 00:59:37.804798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.966 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.966 [2024-05-15 00:59:37.931744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.225 [2024-05-15 00:59:38.028953] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.225 [2024-05-15 00:59:38.028997] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
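The nvmf_tcp_init trace above moves one of the two cvl_0_* ports into a private network namespace, addresses both ends, opens TCP port 4420, and pings in both directions before identify.sh launches nvmf_tgt inside that namespace. A minimal sketch of the same sequence, assuming the interface names and 10.0.0.0/24 addresses used in this run:

  # target port goes into its own namespace; initiator port stays in the default one
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target to initiator

The target itself is then started as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, which is the host/identify.sh@18 line in the trace.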
00:22:51.225 [2024-05-15 00:59:38.029006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.225 [2024-05-15 00:59:38.029016] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.225 [2024-05-15 00:59:38.029024] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.225 [2024-05-15 00:59:38.029122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.225 [2024-05-15 00:59:38.029228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.225 [2024-05-15 00:59:38.029337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.225 [2024-05-15 00:59:38.029347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.483 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.483 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:22:51.483 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.483 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.483 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.483 [2024-05-15 00:59:38.520703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.483 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.483 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:51.483 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.483 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.743 Malloc0 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.743 [2024-05-15 00:59:38.625379] 
nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:51.743 [2024-05-15 00:59:38.625688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.743 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.743 [ 00:22:51.743 { 00:22:51.743 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:51.743 "subtype": "Discovery", 00:22:51.743 "listen_addresses": [ 00:22:51.743 { 00:22:51.743 "trtype": "TCP", 00:22:51.743 "adrfam": "IPv4", 00:22:51.743 "traddr": "10.0.0.2", 00:22:51.744 "trsvcid": "4420" 00:22:51.744 } 00:22:51.744 ], 00:22:51.744 "allow_any_host": true, 00:22:51.744 "hosts": [] 00:22:51.744 }, 00:22:51.744 { 00:22:51.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.744 "subtype": "NVMe", 00:22:51.744 "listen_addresses": [ 00:22:51.744 { 00:22:51.744 "trtype": "TCP", 00:22:51.744 "adrfam": "IPv4", 00:22:51.744 "traddr": "10.0.0.2", 00:22:51.744 "trsvcid": "4420" 00:22:51.744 } 00:22:51.744 ], 00:22:51.744 "allow_any_host": true, 00:22:51.744 "hosts": [], 00:22:51.744 "serial_number": "SPDK00000000000001", 00:22:51.744 "model_number": "SPDK bdev Controller", 00:22:51.744 "max_namespaces": 32, 00:22:51.744 "min_cntlid": 1, 00:22:51.744 "max_cntlid": 65519, 00:22:51.744 "namespaces": [ 00:22:51.744 { 00:22:51.744 "nsid": 1, 00:22:51.744 "bdev_name": "Malloc0", 00:22:51.744 "name": "Malloc0", 00:22:51.744 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:51.744 "eui64": "ABCDEF0123456789", 00:22:51.744 "uuid": "8f43c1ec-6e0d-434f-980a-156b7a64f710" 00:22:51.744 } 00:22:51.744 ] 00:22:51.744 } 00:22:51.744 ] 00:22:51.744 00:59:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.744 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:51.744 [2024-05-15 00:59:38.694776] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
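Once the target is listening on its RPC socket, identify.sh configures it: a TCP transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and data plus discovery listeners on 10.0.0.2:4420, which is what the nvmf_get_subsystems JSON above reflects. The rpc_cmd wrapper in the trace effectively forwards its arguments to scripts/rpc.py, so the same setup can be reproduced roughly as follows (a sketch that assumes the /var/tmp/spdk.sock RPC socket shown in the trace; the arguments themselves are taken verbatim from the log):

  # rpc_cmd in the trace wraps this script (wrapper detail is an assumption; args are from the log)
  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_get_subsystems             # prints the JSON listing shown above

The nvmf_rpc.c warning in the trace also notes that [listen_]address.transport is deprecated in favor of trtype and is slated for removal in v24.09.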
00:22:51.744 [2024-05-15 00:59:38.694872] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549072 ] 00:22:51.744 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.744 [2024-05-15 00:59:38.752217] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:51.744 [2024-05-15 00:59:38.752315] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:51.744 [2024-05-15 00:59:38.752324] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:51.744 [2024-05-15 00:59:38.752346] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:51.744 [2024-05-15 00:59:38.752363] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:51.744 [2024-05-15 00:59:38.752873] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:51.744 [2024-05-15 00:59:38.752918] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000024980 0 00:22:51.744 [2024-05-15 00:59:38.767059] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:51.744 [2024-05-15 00:59:38.767083] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:51.744 [2024-05-15 00:59:38.767090] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:51.744 [2024-05-15 00:59:38.767096] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:51.744 [2024-05-15 00:59:38.767153] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.767165] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.767172] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.744 [2024-05-15 00:59:38.767202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:51.744 [2024-05-15 00:59:38.767225] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.744 [2024-05-15 00:59:38.775062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.744 [2024-05-15 00:59:38.775079] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.744 [2024-05-15 00:59:38.775086] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775097] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:51.744 [2024-05-15 00:59:38.775113] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:51.744 [2024-05-15 00:59:38.775127] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:51.744 [2024-05-15 00:59:38.775137] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:51.744 [2024-05-15 00:59:38.775159] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775166] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775172] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.744 [2024-05-15 00:59:38.775187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.744 [2024-05-15 00:59:38.775206] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.744 [2024-05-15 00:59:38.775362] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.744 [2024-05-15 00:59:38.775372] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.744 [2024-05-15 00:59:38.775384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775390] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:51.744 [2024-05-15 00:59:38.775402] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:51.744 [2024-05-15 00:59:38.775414] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:51.744 [2024-05-15 00:59:38.775422] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775431] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775437] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.744 [2024-05-15 00:59:38.775448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.744 [2024-05-15 00:59:38.775460] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.744 [2024-05-15 00:59:38.775561] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.744 [2024-05-15 00:59:38.775570] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.744 [2024-05-15 00:59:38.775577] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775582] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:51.744 [2024-05-15 00:59:38.775589] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:51.744 [2024-05-15 00:59:38.775599] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:51.744 [2024-05-15 00:59:38.775608] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775616] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775621] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.744 [2024-05-15 00:59:38.775630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.744 [2024-05-15 00:59:38.775641] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.744 [2024-05-15 00:59:38.775735] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.744 [2024-05-15 00:59:38.775743] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.744 [2024-05-15 00:59:38.775747] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775751] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:51.744 [2024-05-15 00:59:38.775758] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:51.744 [2024-05-15 00:59:38.775770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775775] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775781] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.744 [2024-05-15 00:59:38.775790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.744 [2024-05-15 00:59:38.775804] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.744 [2024-05-15 00:59:38.775898] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.744 [2024-05-15 00:59:38.775909] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.744 [2024-05-15 00:59:38.775913] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.775917] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:51.744 [2024-05-15 00:59:38.775924] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:51.744 [2024-05-15 00:59:38.775931] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:51.744 [2024-05-15 00:59:38.775942] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:51.744 [2024-05-15 00:59:38.776051] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:51.744 [2024-05-15 00:59:38.776058] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:51.744 [2024-05-15 00:59:38.776073] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.776078] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.776083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.744 [2024-05-15 00:59:38.776092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.744 [2024-05-15 00:59:38.776109] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.744 [2024-05-15 00:59:38.776206] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.744 [2024-05-15 00:59:38.776212] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.744 [2024-05-15 00:59:38.776216] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.776220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:51.744 [2024-05-15 00:59:38.776226] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:51.744 [2024-05-15 00:59:38.776237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.776242] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.744 [2024-05-15 00:59:38.776248] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.745 [2024-05-15 00:59:38.776259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.745 [2024-05-15 00:59:38.776270] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.745 [2024-05-15 00:59:38.776371] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.745 [2024-05-15 00:59:38.776378] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.745 [2024-05-15 00:59:38.776382] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:51.745 [2024-05-15 00:59:38.776392] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:51.745 [2024-05-15 00:59:38.776399] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:51.745 [2024-05-15 00:59:38.776408] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:51.745 [2024-05-15 00:59:38.776416] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:51.745 [2024-05-15 00:59:38.776433] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776439] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.745 [2024-05-15 00:59:38.776449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.745 [2024-05-15 00:59:38.776460] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.745 [2024-05-15 00:59:38.776617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:51.745 [2024-05-15 00:59:38.776624] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:51.745 [2024-05-15 00:59:38.776628] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776635] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=0 00:22:51.745 [2024-05-15 00:59:38.776643] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:51.745 [2024-05-15 00:59:38.776649] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776661] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776670] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.745 [2024-05-15 00:59:38.776686] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.745 [2024-05-15 00:59:38.776690] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776694] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:51.745 [2024-05-15 00:59:38.776708] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:51.745 [2024-05-15 00:59:38.776718] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:51.745 [2024-05-15 00:59:38.776724] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:51.745 [2024-05-15 00:59:38.776732] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:51.745 [2024-05-15 00:59:38.776739] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:51.745 [2024-05-15 00:59:38.776745] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:51.745 [2024-05-15 00:59:38.776756] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:51.745 [2024-05-15 00:59:38.776764] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776772] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776777] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.745 [2024-05-15 00:59:38.776789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:51.745 [2024-05-15 00:59:38.776800] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.745 [2024-05-15 00:59:38.776908] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.745 [2024-05-15 00:59:38.776915] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.745 [2024-05-15 00:59:38.776919] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776923] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:51.745 [2024-05-15 00:59:38.776937] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776943] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776948] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:51.745 [2024-05-15 00:59:38.776958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.745 [2024-05-15 00:59:38.776965] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776970] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000024980) 00:22:51.745 [2024-05-15 00:59:38.776982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.745 [2024-05-15 00:59:38.776989] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.776998] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000024980) 00:22:51.745 [2024-05-15 00:59:38.777005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.745 [2024-05-15 00:59:38.777013] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.777017] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.777021] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:51.745 [2024-05-15 00:59:38.777027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.745 [2024-05-15 00:59:38.777033] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:51.745 [2024-05-15 00:59:38.777043] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:51.745 [2024-05-15 00:59:38.777057] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.777062] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:51.745 [2024-05-15 00:59:38.777073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.745 [2024-05-15 00:59:38.777086] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:51.745 [2024-05-15 00:59:38.777091] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:22:51.745 [2024-05-15 00:59:38.777096] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:22:51.745 [2024-05-15 00:59:38.777101] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:51.745 [2024-05-15 00:59:38.777108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:51.745 [2024-05-15 00:59:38.777283] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:51.745 [2024-05-15 00:59:38.777289] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:51.745 [2024-05-15 00:59:38.777293] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.777298] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:51.745 [2024-05-15 00:59:38.777305] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:51.745 [2024-05-15 00:59:38.777312] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:51.745 [2024-05-15 00:59:38.777324] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.777330] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:51.745 [2024-05-15 00:59:38.777346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.745 [2024-05-15 00:59:38.777356] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:51.745 [2024-05-15 00:59:38.777475] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:51.745 [2024-05-15 00:59:38.777482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:51.745 [2024-05-15 00:59:38.777489] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.777496] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:22:51.745 [2024-05-15 00:59:38.777502] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:51.745 [2024-05-15 00:59:38.777511] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.777523] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:51.745 [2024-05-15 00:59:38.777528] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.818304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.007 [2024-05-15 00:59:38.818320] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.007 [2024-05-15 00:59:38.818325] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.818332] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.007 [2024-05-15 00:59:38.818353] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:52.007 [2024-05-15 00:59:38.818403] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.818409] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.007 [2024-05-15 00:59:38.818424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.007 [2024-05-15 00:59:38.818433] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.818438] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:52.007 [2024-05-15 00:59:38.818444] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:52.007 [2024-05-15 00:59:38.818453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.007 [2024-05-15 00:59:38.818469] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:52.007 [2024-05-15 00:59:38.818476] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:52.007 [2024-05-15 00:59:38.818731] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.007 [2024-05-15 00:59:38.818737] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.007 [2024-05-15 00:59:38.818742] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.818749] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=1024, cccid=4 00:22:52.007 [2024-05-15 00:59:38.818757] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=1024 00:22:52.007 [2024-05-15 00:59:38.818763] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.818772] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.818777] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.818784] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.007 [2024-05-15 00:59:38.818792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.007 [2024-05-15 00:59:38.818796] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.818801] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:52.007 [2024-05-15 00:59:38.863060] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.007 [2024-05-15 00:59:38.863076] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.007 [2024-05-15 00:59:38.863082] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.863088] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.007 [2024-05-15 00:59:38.863112] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.007 [2024-05-15 00:59:38.863119] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.007 [2024-05-15 00:59:38.863130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.007 [2024-05-15 00:59:38.863151] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:52.007 [2024-05-15 00:59:38.863297] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.008 [2024-05-15 00:59:38.863305] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.008 [2024-05-15 00:59:38.863310] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.863316] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x615000024980): datao=0, datal=3072, cccid=4 00:22:52.008 [2024-05-15 00:59:38.863322] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=3072 00:22:52.008 [2024-05-15 00:59:38.863327] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.863336] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.863340] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.863362] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.008 [2024-05-15 00:59:38.863369] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.008 [2024-05-15 00:59:38.863374] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.863379] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.008 [2024-05-15 00:59:38.863391] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.863400] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.008 [2024-05-15 00:59:38.863410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.008 [2024-05-15 00:59:38.863424] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:52.008 [2024-05-15 00:59:38.863555] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.008 [2024-05-15 00:59:38.863565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.008 [2024-05-15 00:59:38.863570] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.863574] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=8, cccid=4 00:22:52.008 [2024-05-15 00:59:38.863585] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=8 00:22:52.008 [2024-05-15 00:59:38.863591] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.863599] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.863603] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.904322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.008 [2024-05-15 00:59:38.904339] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.008 [2024-05-15 00:59:38.904344] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.008 [2024-05-15 00:59:38.904350] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.008 ===================================================== 00:22:52.008 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:52.008 ===================================================== 00:22:52.008 Controller Capabilities/Features 00:22:52.008 ================================ 00:22:52.008 Vendor ID: 0000 00:22:52.008 Subsystem Vendor ID: 0000 00:22:52.008 Serial Number: .................... 
00:22:52.008 Model Number: ........................................
00:22:52.008 Firmware Version: 24.05
00:22:52.008 Recommended Arb Burst: 0
00:22:52.008 IEEE OUI Identifier: 00 00 00
00:22:52.008 Multi-path I/O
00:22:52.008 May have multiple subsystem ports: No
00:22:52.008 May have multiple controllers: No
00:22:52.008 Associated with SR-IOV VF: No
00:22:52.008 Max Data Transfer Size: 131072
00:22:52.008 Max Number of Namespaces: 0
00:22:52.008 Max Number of I/O Queues: 1024
00:22:52.008 NVMe Specification Version (VS): 1.3
00:22:52.008 NVMe Specification Version (Identify): 1.3
00:22:52.008 Maximum Queue Entries: 128
00:22:52.008 Contiguous Queues Required: Yes
00:22:52.008 Arbitration Mechanisms Supported
00:22:52.008 Weighted Round Robin: Not Supported
00:22:52.008 Vendor Specific: Not Supported
00:22:52.008 Reset Timeout: 15000 ms
00:22:52.008 Doorbell Stride: 4 bytes
00:22:52.008 NVM Subsystem Reset: Not Supported
00:22:52.008 Command Sets Supported
00:22:52.008 NVM Command Set: Supported
00:22:52.008 Boot Partition: Not Supported
00:22:52.008 Memory Page Size Minimum: 4096 bytes
00:22:52.008 Memory Page Size Maximum: 4096 bytes
00:22:52.008 Persistent Memory Region: Not Supported
00:22:52.008 Optional Asynchronous Events Supported
00:22:52.008 Namespace Attribute Notices: Not Supported
00:22:52.008 Firmware Activation Notices: Not Supported
00:22:52.008 ANA Change Notices: Not Supported
00:22:52.008 PLE Aggregate Log Change Notices: Not Supported
00:22:52.008 LBA Status Info Alert Notices: Not Supported
00:22:52.008 EGE Aggregate Log Change Notices: Not Supported
00:22:52.008 Normal NVM Subsystem Shutdown event: Not Supported
00:22:52.008 Zone Descriptor Change Notices: Not Supported
00:22:52.008 Discovery Log Change Notices: Supported
00:22:52.008 Controller Attributes
00:22:52.008 128-bit Host Identifier: Not Supported
00:22:52.008 Non-Operational Permissive Mode: Not Supported
00:22:52.008 NVM Sets: Not Supported
00:22:52.008 Read Recovery Levels: Not Supported
00:22:52.008 Endurance Groups: Not Supported
00:22:52.008 Predictable Latency Mode: Not Supported
00:22:52.008 Traffic Based Keep Alive: Not Supported
00:22:52.008 Namespace Granularity: Not Supported
00:22:52.008 SQ Associations: Not Supported
00:22:52.008 UUID List: Not Supported
00:22:52.008 Multi-Domain Subsystem: Not Supported
00:22:52.008 Fixed Capacity Management: Not Supported
00:22:52.008 Variable Capacity Management: Not Supported
00:22:52.008 Delete Endurance Group: Not Supported
00:22:52.008 Delete NVM Set: Not Supported
00:22:52.008 Extended LBA Formats Supported: Not Supported
00:22:52.008 Flexible Data Placement Supported: Not Supported
00:22:52.008
00:22:52.008 Controller Memory Buffer Support
00:22:52.008 ================================
00:22:52.008 Supported: No
00:22:52.008
00:22:52.008 Persistent Memory Region Support
00:22:52.008 ================================
00:22:52.008 Supported: No
00:22:52.008
00:22:52.008 Admin Command Set Attributes
00:22:52.008 ============================
00:22:52.008 Security Send/Receive: Not Supported
00:22:52.008 Format NVM: Not Supported
00:22:52.008 Firmware Activate/Download: Not Supported
00:22:52.008 Namespace Management: Not Supported
00:22:52.008 Device Self-Test: Not Supported
00:22:52.008 Directives: Not Supported
00:22:52.008 NVMe-MI: Not Supported
00:22:52.008 Virtualization Management: Not Supported
00:22:52.008 Doorbell Buffer Config: Not Supported
00:22:52.008 Get LBA Status Capability: Not Supported
00:22:52.008 Command & Feature Lockdown Capability: Not Supported
00:22:52.008 Abort Command Limit: 1
00:22:52.008 Async Event Request Limit: 4
00:22:52.008 Number of Firmware Slots: N/A
00:22:52.008 Firmware Slot 1 Read-Only: N/A
00:22:52.008 Firmware Activation Without Reset: N/A
00:22:52.008 Multiple Update Detection Support: N/A
00:22:52.008 Firmware Update Granularity: No Information Provided
00:22:52.008 Per-Namespace SMART Log: No
00:22:52.008 Asymmetric Namespace Access Log Page: Not Supported
00:22:52.008 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:52.008 Command Effects Log Page: Not Supported
00:22:52.008 Get Log Page Extended Data: Supported
00:22:52.008 Telemetry Log Pages: Not Supported
00:22:52.008 Persistent Event Log Pages: Not Supported
00:22:52.008 Supported Log Pages Log Page: May Support
00:22:52.008 Commands Supported & Effects Log Page: Not Supported
00:22:52.008 Feature Identifiers & Effects Log Page: May Support
00:22:52.008 NVMe-MI Commands & Effects Log Page: May Support
00:22:52.008 Data Area 4 for Telemetry Log: Not Supported
00:22:52.008 Error Log Page Entries Supported: 128
00:22:52.008 Keep Alive: Not Supported
00:22:52.008
00:22:52.008 NVM Command Set Attributes
00:22:52.008 ==========================
00:22:52.008 Submission Queue Entry Size
00:22:52.008 Max: 1
00:22:52.008 Min: 1
00:22:52.008 Completion Queue Entry Size
00:22:52.008 Max: 1
00:22:52.008 Min: 1
00:22:52.008 Number of Namespaces: 0
00:22:52.008 Compare Command: Not Supported
00:22:52.008 Write Uncorrectable Command: Not Supported
00:22:52.008 Dataset Management Command: Not Supported
00:22:52.008 Write Zeroes Command: Not Supported
00:22:52.008 Set Features Save Field: Not Supported
00:22:52.008 Reservations: Not Supported
00:22:52.008 Timestamp: Not Supported
00:22:52.008 Copy: Not Supported
00:22:52.008 Volatile Write Cache: Not Present
00:22:52.008 Atomic Write Unit (Normal): 1
00:22:52.008 Atomic Write Unit (PFail): 1
00:22:52.008 Atomic Compare & Write Unit: 1
00:22:52.008 Fused Compare & Write: Supported
00:22:52.008 Scatter-Gather List
00:22:52.008 SGL Command Set: Supported
00:22:52.008 SGL Keyed: Supported
00:22:52.008 SGL Bit Bucket Descriptor: Not Supported
00:22:52.008 SGL Metadata Pointer: Not Supported
00:22:52.008 Oversized SGL: Not Supported
00:22:52.008 SGL Metadata Address: Not Supported
00:22:52.008 SGL Offset: Supported
00:22:52.008 Transport SGL Data Block: Not Supported
00:22:52.008 Replay Protected Memory Block: Not Supported
00:22:52.008
00:22:52.008 Firmware Slot Information
00:22:52.008 =========================
00:22:52.008 Active slot: 0
00:22:52.008
00:22:52.008
00:22:52.008 Error Log
00:22:52.008 =========
00:22:52.008
00:22:52.008 Active Namespaces
00:22:52.008 =================
00:22:52.008 Discovery Log Page
00:22:52.008 ==================
00:22:52.008 Generation Counter: 2
00:22:52.009 Number of Records: 2
00:22:52.009 Record Format: 0
00:22:52.009
00:22:52.009 Discovery Log Entry 0
00:22:52.009 ----------------------
00:22:52.009 Transport Type: 3 (TCP)
00:22:52.009 Address Family: 1 (IPv4)
00:22:52.009 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:52.009 Entry Flags:
00:22:52.009 Duplicate Returned Information: 1
00:22:52.009 Explicit Persistent Connection Support for Discovery: 1
00:22:52.009 Transport Requirements:
00:22:52.009 Secure Channel: Not Required
00:22:52.009 Port ID: 0 (0x0000)
00:22:52.009 Controller ID: 65535 (0xffff)
00:22:52.009 Admin Max SQ Size: 128
00:22:52.009 Transport Service Identifier: 4420
00:22:52.009 NVM Subsystem Qualified Name:
nqn.2014-08.org.nvmexpress.discovery 00:22:52.009 Transport Address: 10.0.0.2 00:22:52.009 Discovery Log Entry 1 00:22:52.009 ---------------------- 00:22:52.009 Transport Type: 3 (TCP) 00:22:52.009 Address Family: 1 (IPv4) 00:22:52.009 Subsystem Type: 2 (NVM Subsystem) 00:22:52.009 Entry Flags: 00:22:52.009 Duplicate Returned Information: 0 00:22:52.009 Explicit Persistent Connection Support for Discovery: 0 00:22:52.009 Transport Requirements: 00:22:52.009 Secure Channel: Not Required 00:22:52.009 Port ID: 0 (0x0000) 00:22:52.009 Controller ID: 65535 (0xffff) 00:22:52.009 Admin Max SQ Size: 128 00:22:52.009 Transport Service Identifier: 4420 00:22:52.009 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:52.009 Transport Address: 10.0.0.2 [2024-05-15 00:59:38.904480] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:52.009 [2024-05-15 00:59:38.904498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.009 [2024-05-15 00:59:38.904508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.009 [2024-05-15 00:59:38.904515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.009 [2024-05-15 00:59:38.904522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.009 [2024-05-15 00:59:38.904535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.904541] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.904546] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.009 [2024-05-15 00:59:38.904558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.009 [2024-05-15 00:59:38.904577] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.009 [2024-05-15 00:59:38.904683] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.009 [2024-05-15 00:59:38.904690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.009 [2024-05-15 00:59:38.904695] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.904705] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.009 [2024-05-15 00:59:38.904716] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.904721] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.904727] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.009 [2024-05-15 00:59:38.904738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.009 [2024-05-15 00:59:38.904753] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.009 [2024-05-15 00:59:38.904878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.009 [2024-05-15 00:59:38.904884] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.009 [2024-05-15 00:59:38.904888] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.904892] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.009 [2024-05-15 00:59:38.904899] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:52.009 [2024-05-15 00:59:38.904906] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:52.009 [2024-05-15 00:59:38.904917] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.904922] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.904928] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.009 [2024-05-15 00:59:38.904937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.009 [2024-05-15 00:59:38.904948] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.009 [2024-05-15 00:59:38.905055] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.009 [2024-05-15 00:59:38.905062] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.009 [2024-05-15 00:59:38.905065] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905070] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.009 [2024-05-15 00:59:38.905080] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905085] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905089] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.009 [2024-05-15 00:59:38.905097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.009 [2024-05-15 00:59:38.905107] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.009 [2024-05-15 00:59:38.905206] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.009 [2024-05-15 00:59:38.905213] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.009 [2024-05-15 00:59:38.905217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905221] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.009 [2024-05-15 00:59:38.905230] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905234] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.009 [2024-05-15 00:59:38.905246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.009 [2024-05-15 00:59:38.905256] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.009 [2024-05-15 00:59:38.905360] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.009 [2024-05-15 00:59:38.905367] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.009 [2024-05-15 00:59:38.905371] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905375] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.009 [2024-05-15 00:59:38.905384] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905389] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905393] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.009 [2024-05-15 00:59:38.905406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.009 [2024-05-15 00:59:38.905415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.009 [2024-05-15 00:59:38.905508] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.009 [2024-05-15 00:59:38.905514] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.009 [2024-05-15 00:59:38.905518] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905523] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.009 [2024-05-15 00:59:38.905532] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905536] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905541] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.009 [2024-05-15 00:59:38.905548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.009 [2024-05-15 00:59:38.905558] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.009 [2024-05-15 00:59:38.905668] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.009 [2024-05-15 00:59:38.905674] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.009 [2024-05-15 00:59:38.905678] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905682] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.009 [2024-05-15 00:59:38.905692] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905696] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905700] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.009 [2024-05-15 00:59:38.905708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.009 [2024-05-15 00:59:38.905718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.009 [2024-05-15 00:59:38.905819] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:22:52.009 [2024-05-15 00:59:38.905827] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.009 [2024-05-15 00:59:38.905831] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905835] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.009 [2024-05-15 00:59:38.905845] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905849] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.009 [2024-05-15 00:59:38.905854] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.009 [2024-05-15 00:59:38.905862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.009 [2024-05-15 00:59:38.905872] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.009 [2024-05-15 00:59:38.905964] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.010 [2024-05-15 00:59:38.905970] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.010 [2024-05-15 00:59:38.905974] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.905978] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.010 [2024-05-15 00:59:38.905988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.905992] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.905996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.010 [2024-05-15 00:59:38.906006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.010 [2024-05-15 00:59:38.906016] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.010 [2024-05-15 00:59:38.906117] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.010 [2024-05-15 00:59:38.906124] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.010 [2024-05-15 00:59:38.906128] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906132] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.010 [2024-05-15 00:59:38.906141] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906145] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906150] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.010 [2024-05-15 00:59:38.906158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.010 [2024-05-15 00:59:38.906167] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.010 [2024-05-15 00:59:38.906270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.010 [2024-05-15 00:59:38.906278] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:52.010 [2024-05-15 00:59:38.906282] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906286] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.010 [2024-05-15 00:59:38.906296] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906300] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906304] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.010 [2024-05-15 00:59:38.906312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.010 [2024-05-15 00:59:38.906322] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.010 [2024-05-15 00:59:38.906421] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.010 [2024-05-15 00:59:38.906429] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.010 [2024-05-15 00:59:38.906433] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906437] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.010 [2024-05-15 00:59:38.906447] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906451] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906456] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.010 [2024-05-15 00:59:38.906464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.010 [2024-05-15 00:59:38.906473] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.010 [2024-05-15 00:59:38.906572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.010 [2024-05-15 00:59:38.906579] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.010 [2024-05-15 00:59:38.906582] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906587] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.010 [2024-05-15 00:59:38.906596] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906600] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.010 [2024-05-15 00:59:38.906612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.010 [2024-05-15 00:59:38.906621] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.010 [2024-05-15 00:59:38.906718] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.010 [2024-05-15 00:59:38.906724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.010 [2024-05-15 00:59:38.906728] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.010 [2024-05-15 
00:59:38.906732] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.010 [2024-05-15 00:59:38.906741] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906745] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906750] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.010 [2024-05-15 00:59:38.906758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.010 [2024-05-15 00:59:38.906767] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.010 [2024-05-15 00:59:38.906861] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.010 [2024-05-15 00:59:38.906868] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.010 [2024-05-15 00:59:38.906872] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906876] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.010 [2024-05-15 00:59:38.906885] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906890] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.906894] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.010 [2024-05-15 00:59:38.906902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.010 [2024-05-15 00:59:38.906911] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.010 [2024-05-15 00:59:38.907009] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.010 [2024-05-15 00:59:38.907017] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.010 [2024-05-15 00:59:38.907021] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.907025] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.010 [2024-05-15 00:59:38.907034] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.907039] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.911054] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.010 [2024-05-15 00:59:38.911066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.010 [2024-05-15 00:59:38.911076] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.010 [2024-05-15 00:59:38.911180] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.010 [2024-05-15 00:59:38.911186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.010 [2024-05-15 00:59:38.911190] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:38.911194] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 
00:22:52.010 [2024-05-15 00:59:38.911203] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:52.010 00:22:52.010 00:59:38 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:52.010 [2024-05-15 00:59:38.990855] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:52.010 [2024-05-15 00:59:38.990954] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549131 ] 00:22:52.010 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.010 [2024-05-15 00:59:39.047097] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:52.010 [2024-05-15 00:59:39.047184] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:52.010 [2024-05-15 00:59:39.047194] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:52.010 [2024-05-15 00:59:39.047214] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:52.010 [2024-05-15 00:59:39.047229] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:52.010 [2024-05-15 00:59:39.047667] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:52.010 [2024-05-15 00:59:39.047697] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000024980 0 00:22:52.010 [2024-05-15 00:59:39.062065] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:52.010 [2024-05-15 00:59:39.062083] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:52.010 [2024-05-15 00:59:39.062091] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:52.010 [2024-05-15 00:59:39.062096] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:52.010 [2024-05-15 00:59:39.062143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:39.062152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.010 [2024-05-15 00:59:39.062159] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.010 [2024-05-15 00:59:39.062186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:52.010 [2024-05-15 00:59:39.062208] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.273 [2024-05-15 00:59:39.070071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.273 [2024-05-15 00:59:39.070088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.273 [2024-05-15 00:59:39.070093] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.273 [2024-05-15 00:59:39.070099] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:52.273 [2024-05-15 00:59:39.070114] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: 
CNTLID 0x0001 00:22:52.274 [2024-05-15 00:59:39.070127] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:52.274 [2024-05-15 00:59:39.070135] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:52.274 [2024-05-15 00:59:39.070152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070158] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070165] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.274 [2024-05-15 00:59:39.070183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.274 [2024-05-15 00:59:39.070202] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.274 [2024-05-15 00:59:39.070297] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.274 [2024-05-15 00:59:39.070306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.274 [2024-05-15 00:59:39.070317] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070323] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:52.274 [2024-05-15 00:59:39.070333] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:52.274 [2024-05-15 00:59:39.070344] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:52.274 [2024-05-15 00:59:39.070352] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070359] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070366] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.274 [2024-05-15 00:59:39.070377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.274 [2024-05-15 00:59:39.070389] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.274 [2024-05-15 00:59:39.070466] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.274 [2024-05-15 00:59:39.070474] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.274 [2024-05-15 00:59:39.070478] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070482] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:52.274 [2024-05-15 00:59:39.070489] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:52.274 [2024-05-15 00:59:39.070498] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:52.274 [2024-05-15 00:59:39.070507] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070512] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070518] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.274 [2024-05-15 00:59:39.070530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.274 [2024-05-15 00:59:39.070543] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.274 [2024-05-15 00:59:39.070614] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.274 [2024-05-15 00:59:39.070620] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.274 [2024-05-15 00:59:39.070624] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070629] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:52.274 [2024-05-15 00:59:39.070636] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:52.274 [2024-05-15 00:59:39.070646] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070653] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070659] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.274 [2024-05-15 00:59:39.070669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.274 [2024-05-15 00:59:39.070680] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.274 [2024-05-15 00:59:39.070759] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.274 [2024-05-15 00:59:39.070768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.274 [2024-05-15 00:59:39.070775] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070782] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:52.274 [2024-05-15 00:59:39.070790] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:52.274 [2024-05-15 00:59:39.070799] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:52.274 [2024-05-15 00:59:39.070810] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:52.274 [2024-05-15 00:59:39.070918] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:52.274 [2024-05-15 00:59:39.070927] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:52.274 [2024-05-15 00:59:39.070939] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070946] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.070952] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.274 [2024-05-15 00:59:39.070961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.274 [2024-05-15 00:59:39.070972] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.274 [2024-05-15 00:59:39.071054] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.274 [2024-05-15 00:59:39.071063] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.274 [2024-05-15 00:59:39.071067] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071072] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:52.274 [2024-05-15 00:59:39.071079] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:52.274 [2024-05-15 00:59:39.071091] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071097] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071103] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.274 [2024-05-15 00:59:39.071114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.274 [2024-05-15 00:59:39.071129] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.274 [2024-05-15 00:59:39.071196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.274 [2024-05-15 00:59:39.071208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.274 [2024-05-15 00:59:39.071214] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:52.274 [2024-05-15 00:59:39.071228] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:52.274 [2024-05-15 00:59:39.071234] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:52.274 [2024-05-15 00:59:39.071242] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:52.274 [2024-05-15 00:59:39.071251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:52.274 [2024-05-15 00:59:39.071265] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071271] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.274 [2024-05-15 00:59:39.071281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.274 [2024-05-15 00:59:39.071292] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.274 [2024-05-15 00:59:39.071429] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.274 [2024-05-15 00:59:39.071436] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:22:52.274 [2024-05-15 00:59:39.071440] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071445] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=0 00:22:52.274 [2024-05-15 00:59:39.071455] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:52.274 [2024-05-15 00:59:39.071463] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071475] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071482] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.274 [2024-05-15 00:59:39.071500] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.274 [2024-05-15 00:59:39.071504] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.274 [2024-05-15 00:59:39.071508] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:52.274 [2024-05-15 00:59:39.071521] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:52.274 [2024-05-15 00:59:39.071528] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:52.274 [2024-05-15 00:59:39.071534] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:52.274 [2024-05-15 00:59:39.071541] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:52.275 [2024-05-15 00:59:39.071548] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:52.275 [2024-05-15 00:59:39.071554] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.071571] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.071583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.275 [2024-05-15 00:59:39.071608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.275 [2024-05-15 00:59:39.071619] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.275 [2024-05-15 00:59:39.071697] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.275 [2024-05-15 00:59:39.071705] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.275 [2024-05-15 00:59:39.071711] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071717] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:22:52.275 [2024-05-15 00:59:39.071726] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071732] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071738] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:22:52.275 [2024-05-15 00:59:39.071751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.275 [2024-05-15 00:59:39.071760] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071764] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071769] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000024980) 00:22:52.275 [2024-05-15 00:59:39.071775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.275 [2024-05-15 00:59:39.071782] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071793] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000024980) 00:22:52.275 [2024-05-15 00:59:39.071806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.275 [2024-05-15 00:59:39.071813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071818] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071825] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.275 [2024-05-15 00:59:39.071832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.275 [2024-05-15 00:59:39.071838] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.071849] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.071857] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.071862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.275 [2024-05-15 00:59:39.071871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.275 [2024-05-15 00:59:39.071886] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.275 [2024-05-15 00:59:39.071893] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:22:52.275 [2024-05-15 00:59:39.071898] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:22:52.275 [2024-05-15 00:59:39.071903] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.275 [2024-05-15 00:59:39.071908] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, 
cid 4, qid 0 00:22:52.275 [2024-05-15 00:59:39.072014] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.275 [2024-05-15 00:59:39.072021] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.275 [2024-05-15 00:59:39.072027] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072033] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.275 [2024-05-15 00:59:39.072042] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:52.275 [2024-05-15 00:59:39.072057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.072067] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.072076] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.072088] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072095] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.275 [2024-05-15 00:59:39.072111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.275 [2024-05-15 00:59:39.072121] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:52.275 [2024-05-15 00:59:39.072200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.275 [2024-05-15 00:59:39.072207] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.275 [2024-05-15 00:59:39.072212] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072218] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.275 [2024-05-15 00:59:39.072272] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.072284] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.072295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072300] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.275 [2024-05-15 00:59:39.072310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.275 [2024-05-15 00:59:39.072322] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:52.275 [2024-05-15 00:59:39.072403] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.275 [2024-05-15 00:59:39.072410] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.275 
[2024-05-15 00:59:39.072414] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072419] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:22:52.275 [2024-05-15 00:59:39.072424] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:52.275 [2024-05-15 00:59:39.072429] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072468] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072473] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.275 [2024-05-15 00:59:39.072533] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.275 [2024-05-15 00:59:39.072537] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072542] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.275 [2024-05-15 00:59:39.072559] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:52.275 [2024-05-15 00:59:39.072573] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.072583] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:52.275 [2024-05-15 00:59:39.072592] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072597] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.275 [2024-05-15 00:59:39.072606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.275 [2024-05-15 00:59:39.072616] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:52.275 [2024-05-15 00:59:39.072706] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.275 [2024-05-15 00:59:39.072715] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.275 [2024-05-15 00:59:39.072719] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072723] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:22:52.275 [2024-05-15 00:59:39.072728] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:52.275 [2024-05-15 00:59:39.072733] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072774] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072778] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.275 [2024-05-15 00:59:39.072837] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.275 [2024-05-15 00:59:39.072843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.275 [2024-05-15 00:59:39.072847] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.072851] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.276 [2024-05-15 00:59:39.072869] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:52.276 [2024-05-15 00:59:39.072911] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:52.276 [2024-05-15 00:59:39.072920] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.072928] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.072937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.276 [2024-05-15 00:59:39.072949] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:52.276 [2024-05-15 00:59:39.073037] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.276 [2024-05-15 00:59:39.073052] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.276 [2024-05-15 00:59:39.073058] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073062] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:22:52.276 [2024-05-15 00:59:39.073067] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:52.276 [2024-05-15 00:59:39.073072] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073102] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073105] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.276 [2024-05-15 00:59:39.073169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.276 [2024-05-15 00:59:39.073173] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073177] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.276 [2024-05-15 00:59:39.073189] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:52.276 [2024-05-15 00:59:39.073200] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:52.276 [2024-05-15 00:59:39.073209] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:52.276 [2024-05-15 00:59:39.073217] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:52.276 [2024-05-15 00:59:39.073224] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 
00:22:52.276 [2024-05-15 00:59:39.073231] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:52.276 [2024-05-15 00:59:39.073237] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:52.276 [2024-05-15 00:59:39.073245] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:52.276 [2024-05-15 00:59:39.073272] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073278] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.073287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.276 [2024-05-15 00:59:39.073299] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073304] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073309] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.073317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.276 [2024-05-15 00:59:39.073330] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:52.276 [2024-05-15 00:59:39.073336] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:52.276 [2024-05-15 00:59:39.073422] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.276 [2024-05-15 00:59:39.073432] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.276 [2024-05-15 00:59:39.073437] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073443] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.276 [2024-05-15 00:59:39.073456] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.276 [2024-05-15 00:59:39.073465] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.276 [2024-05-15 00:59:39.073471] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073475] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:52.276 [2024-05-15 00:59:39.073484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.073496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.276 [2024-05-15 00:59:39.073506] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:52.276 [2024-05-15 00:59:39.073580] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.276 [2024-05-15 00:59:39.073587] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.276 [2024-05-15 00:59:39.073591] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073595] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:52.276 [2024-05-15 00:59:39.073604] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073609] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.073621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.276 [2024-05-15 00:59:39.073630] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:52.276 [2024-05-15 00:59:39.073703] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.276 [2024-05-15 00:59:39.073709] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.276 [2024-05-15 00:59:39.073713] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073717] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:52.276 [2024-05-15 00:59:39.073726] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073730] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.073740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.276 [2024-05-15 00:59:39.073749] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:52.276 [2024-05-15 00:59:39.073830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.276 [2024-05-15 00:59:39.073837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.276 [2024-05-15 00:59:39.073841] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073845] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:52.276 [2024-05-15 00:59:39.073861] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073866] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.073877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.276 [2024-05-15 00:59:39.073885] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073891] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.073899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.276 [2024-05-15 00:59:39.073912] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073919] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.073927] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.276 [2024-05-15 00:59:39.073939] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.073946] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000024980) 00:22:52.276 [2024-05-15 00:59:39.073955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.276 [2024-05-15 00:59:39.073967] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:52.276 [2024-05-15 00:59:39.073975] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:52.276 [2024-05-15 00:59:39.073980] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:22:52.276 [2024-05-15 00:59:39.073989] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:22:52.276 [2024-05-15 00:59:39.078065] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.276 [2024-05-15 00:59:39.078077] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.276 [2024-05-15 00:59:39.078081] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.078086] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=8192, cccid=5 00:22:52.276 [2024-05-15 00:59:39.078094] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x615000024980): expected_datao=0, payload_size=8192 00:22:52.276 [2024-05-15 00:59:39.078101] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.078109] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.078115] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.078122] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.276 [2024-05-15 00:59:39.078129] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.276 [2024-05-15 00:59:39.078135] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.276 [2024-05-15 00:59:39.078142] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=512, cccid=4 00:22:52.277 [2024-05-15 00:59:39.078147] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=512 00:22:52.277 [2024-05-15 00:59:39.078152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078159] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078166] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.277 [2024-05-15 00:59:39.078183] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.277 [2024-05-15 00:59:39.078187] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078191] 
nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=512, cccid=6 00:22:52.277 [2024-05-15 00:59:39.078196] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x615000024980): expected_datao=0, payload_size=512 00:22:52.277 [2024-05-15 00:59:39.078200] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078213] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078218] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.277 [2024-05-15 00:59:39.078230] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.277 [2024-05-15 00:59:39.078234] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078241] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=7 00:22:52.277 [2024-05-15 00:59:39.078249] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:22:52.277 [2024-05-15 00:59:39.078255] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078262] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078266] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078273] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.277 [2024-05-15 00:59:39.078282] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.277 [2024-05-15 00:59:39.078287] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078292] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:22:52.277 [2024-05-15 00:59:39.078312] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.277 [2024-05-15 00:59:39.078320] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.277 [2024-05-15 00:59:39.078323] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078327] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:22:52.277 [2024-05-15 00:59:39.078337] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.277 [2024-05-15 00:59:39.078346] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.277 [2024-05-15 00:59:39.078351] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078358] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x615000024980 00:22:52.277 [2024-05-15 00:59:39.078369] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.277 [2024-05-15 00:59:39.078377] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.277 [2024-05-15 00:59:39.078381] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.277 [2024-05-15 00:59:39.078386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x615000024980 00:22:52.277 
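The debug exchange above is the admin-queue traffic (IDENTIFY, GET LOG PAGE, GET FEATURES and KEEP ALIVE capsules plus their completions) that host/identify.sh drives through SPDK's identify example; the banner and controller report that follow are that example's formatted output for the target at 10.0.0.2:4420. As a rough, hedged sketch only, reproducing the report by hand against this target would look something like the lines below; the binary path and the exact transport-ID keys are assumptions about the local SPDK build layout, not values taken from this log:

  # Sketch (assumed path and flags): query the NVMe-oF/TCP subsystem shown in the report below
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
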
===================================================== 00:22:52.277 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.277 ===================================================== 00:22:52.277 Controller Capabilities/Features 00:22:52.277 ================================ 00:22:52.277 Vendor ID: 8086 00:22:52.277 Subsystem Vendor ID: 8086 00:22:52.277 Serial Number: SPDK00000000000001 00:22:52.277 Model Number: SPDK bdev Controller 00:22:52.277 Firmware Version: 24.05 00:22:52.277 Recommended Arb Burst: 6 00:22:52.277 IEEE OUI Identifier: e4 d2 5c 00:22:52.277 Multi-path I/O 00:22:52.277 May have multiple subsystem ports: Yes 00:22:52.277 May have multiple controllers: Yes 00:22:52.277 Associated with SR-IOV VF: No 00:22:52.277 Max Data Transfer Size: 131072 00:22:52.277 Max Number of Namespaces: 32 00:22:52.277 Max Number of I/O Queues: 127 00:22:52.277 NVMe Specification Version (VS): 1.3 00:22:52.277 NVMe Specification Version (Identify): 1.3 00:22:52.277 Maximum Queue Entries: 128 00:22:52.277 Contiguous Queues Required: Yes 00:22:52.277 Arbitration Mechanisms Supported 00:22:52.277 Weighted Round Robin: Not Supported 00:22:52.277 Vendor Specific: Not Supported 00:22:52.277 Reset Timeout: 15000 ms 00:22:52.277 Doorbell Stride: 4 bytes 00:22:52.277 NVM Subsystem Reset: Not Supported 00:22:52.277 Command Sets Supported 00:22:52.277 NVM Command Set: Supported 00:22:52.277 Boot Partition: Not Supported 00:22:52.277 Memory Page Size Minimum: 4096 bytes 00:22:52.277 Memory Page Size Maximum: 4096 bytes 00:22:52.277 Persistent Memory Region: Not Supported 00:22:52.277 Optional Asynchronous Events Supported 00:22:52.277 Namespace Attribute Notices: Supported 00:22:52.277 Firmware Activation Notices: Not Supported 00:22:52.277 ANA Change Notices: Not Supported 00:22:52.277 PLE Aggregate Log Change Notices: Not Supported 00:22:52.277 LBA Status Info Alert Notices: Not Supported 00:22:52.277 EGE Aggregate Log Change Notices: Not Supported 00:22:52.277 Normal NVM Subsystem Shutdown event: Not Supported 00:22:52.277 Zone Descriptor Change Notices: Not Supported 00:22:52.277 Discovery Log Change Notices: Not Supported 00:22:52.277 Controller Attributes 00:22:52.277 128-bit Host Identifier: Supported 00:22:52.277 Non-Operational Permissive Mode: Not Supported 00:22:52.277 NVM Sets: Not Supported 00:22:52.277 Read Recovery Levels: Not Supported 00:22:52.277 Endurance Groups: Not Supported 00:22:52.277 Predictable Latency Mode: Not Supported 00:22:52.277 Traffic Based Keep ALive: Not Supported 00:22:52.277 Namespace Granularity: Not Supported 00:22:52.277 SQ Associations: Not Supported 00:22:52.277 UUID List: Not Supported 00:22:52.277 Multi-Domain Subsystem: Not Supported 00:22:52.277 Fixed Capacity Management: Not Supported 00:22:52.277 Variable Capacity Management: Not Supported 00:22:52.277 Delete Endurance Group: Not Supported 00:22:52.277 Delete NVM Set: Not Supported 00:22:52.277 Extended LBA Formats Supported: Not Supported 00:22:52.277 Flexible Data Placement Supported: Not Supported 00:22:52.277 00:22:52.277 Controller Memory Buffer Support 00:22:52.277 ================================ 00:22:52.277 Supported: No 00:22:52.277 00:22:52.277 Persistent Memory Region Support 00:22:52.277 ================================ 00:22:52.277 Supported: No 00:22:52.277 00:22:52.277 Admin Command Set Attributes 00:22:52.277 ============================ 00:22:52.277 Security Send/Receive: Not Supported 00:22:52.277 Format NVM: Not Supported 00:22:52.277 Firmware Activate/Download: 
Not Supported 00:22:52.277 Namespace Management: Not Supported 00:22:52.277 Device Self-Test: Not Supported 00:22:52.277 Directives: Not Supported 00:22:52.277 NVMe-MI: Not Supported 00:22:52.277 Virtualization Management: Not Supported 00:22:52.277 Doorbell Buffer Config: Not Supported 00:22:52.277 Get LBA Status Capability: Not Supported 00:22:52.277 Command & Feature Lockdown Capability: Not Supported 00:22:52.277 Abort Command Limit: 4 00:22:52.277 Async Event Request Limit: 4 00:22:52.277 Number of Firmware Slots: N/A 00:22:52.277 Firmware Slot 1 Read-Only: N/A 00:22:52.277 Firmware Activation Without Reset: N/A 00:22:52.277 Multiple Update Detection Support: N/A 00:22:52.277 Firmware Update Granularity: No Information Provided 00:22:52.277 Per-Namespace SMART Log: No 00:22:52.277 Asymmetric Namespace Access Log Page: Not Supported 00:22:52.277 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:52.277 Command Effects Log Page: Supported 00:22:52.277 Get Log Page Extended Data: Supported 00:22:52.277 Telemetry Log Pages: Not Supported 00:22:52.277 Persistent Event Log Pages: Not Supported 00:22:52.277 Supported Log Pages Log Page: May Support 00:22:52.277 Commands Supported & Effects Log Page: Not Supported 00:22:52.277 Feature Identifiers & Effects Log Page:May Support 00:22:52.277 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.277 Data Area 4 for Telemetry Log: Not Supported 00:22:52.278 Error Log Page Entries Supported: 128 00:22:52.278 Keep Alive: Supported 00:22:52.278 Keep Alive Granularity: 10000 ms 00:22:52.278 00:22:52.278 NVM Command Set Attributes 00:22:52.278 ========================== 00:22:52.278 Submission Queue Entry Size 00:22:52.278 Max: 64 00:22:52.278 Min: 64 00:22:52.278 Completion Queue Entry Size 00:22:52.278 Max: 16 00:22:52.278 Min: 16 00:22:52.278 Number of Namespaces: 32 00:22:52.278 Compare Command: Supported 00:22:52.278 Write Uncorrectable Command: Not Supported 00:22:52.278 Dataset Management Command: Supported 00:22:52.278 Write Zeroes Command: Supported 00:22:52.278 Set Features Save Field: Not Supported 00:22:52.278 Reservations: Supported 00:22:52.278 Timestamp: Not Supported 00:22:52.278 Copy: Supported 00:22:52.278 Volatile Write Cache: Present 00:22:52.278 Atomic Write Unit (Normal): 1 00:22:52.278 Atomic Write Unit (PFail): 1 00:22:52.278 Atomic Compare & Write Unit: 1 00:22:52.278 Fused Compare & Write: Supported 00:22:52.278 Scatter-Gather List 00:22:52.278 SGL Command Set: Supported 00:22:52.278 SGL Keyed: Supported 00:22:52.278 SGL Bit Bucket Descriptor: Not Supported 00:22:52.278 SGL Metadata Pointer: Not Supported 00:22:52.278 Oversized SGL: Not Supported 00:22:52.278 SGL Metadata Address: Not Supported 00:22:52.278 SGL Offset: Supported 00:22:52.278 Transport SGL Data Block: Not Supported 00:22:52.278 Replay Protected Memory Block: Not Supported 00:22:52.278 00:22:52.278 Firmware Slot Information 00:22:52.278 ========================= 00:22:52.278 Active slot: 1 00:22:52.278 Slot 1 Firmware Revision: 24.05 00:22:52.278 00:22:52.278 00:22:52.278 Commands Supported and Effects 00:22:52.278 ============================== 00:22:52.278 Admin Commands 00:22:52.278 -------------- 00:22:52.278 Get Log Page (02h): Supported 00:22:52.278 Identify (06h): Supported 00:22:52.278 Abort (08h): Supported 00:22:52.278 Set Features (09h): Supported 00:22:52.278 Get Features (0Ah): Supported 00:22:52.278 Asynchronous Event Request (0Ch): Supported 00:22:52.278 Keep Alive (18h): Supported 00:22:52.278 I/O Commands 00:22:52.278 ------------ 00:22:52.278 
Flush (00h): Supported LBA-Change 00:22:52.278 Write (01h): Supported LBA-Change 00:22:52.278 Read (02h): Supported 00:22:52.278 Compare (05h): Supported 00:22:52.278 Write Zeroes (08h): Supported LBA-Change 00:22:52.278 Dataset Management (09h): Supported LBA-Change 00:22:52.278 Copy (19h): Supported LBA-Change 00:22:52.278 Unknown (79h): Supported LBA-Change 00:22:52.278 Unknown (7Ah): Supported 00:22:52.278 00:22:52.278 Error Log 00:22:52.278 ========= 00:22:52.278 00:22:52.278 Arbitration 00:22:52.278 =========== 00:22:52.278 Arbitration Burst: 1 00:22:52.278 00:22:52.278 Power Management 00:22:52.278 ================ 00:22:52.278 Number of Power States: 1 00:22:52.278 Current Power State: Power State #0 00:22:52.278 Power State #0: 00:22:52.278 Max Power: 0.00 W 00:22:52.278 Non-Operational State: Operational 00:22:52.278 Entry Latency: Not Reported 00:22:52.278 Exit Latency: Not Reported 00:22:52.278 Relative Read Throughput: 0 00:22:52.278 Relative Read Latency: 0 00:22:52.278 Relative Write Throughput: 0 00:22:52.278 Relative Write Latency: 0 00:22:52.278 Idle Power: Not Reported 00:22:52.278 Active Power: Not Reported 00:22:52.278 Non-Operational Permissive Mode: Not Supported 00:22:52.278 00:22:52.278 Health Information 00:22:52.278 ================== 00:22:52.278 Critical Warnings: 00:22:52.278 Available Spare Space: OK 00:22:52.278 Temperature: OK 00:22:52.278 Device Reliability: OK 00:22:52.278 Read Only: No 00:22:52.278 Volatile Memory Backup: OK 00:22:52.278 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:52.278 Temperature Threshold: [2024-05-15 00:59:39.078524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.278 [2024-05-15 00:59:39.078536] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000024980) 00:22:52.278 [2024-05-15 00:59:39.078547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.278 [2024-05-15 00:59:39.078560] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:22:52.278 [2024-05-15 00:59:39.078650] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.278 [2024-05-15 00:59:39.078658] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.278 [2024-05-15 00:59:39.078662] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.278 [2024-05-15 00:59:39.078668] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x615000024980 00:22:52.278 [2024-05-15 00:59:39.078710] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:52.278 [2024-05-15 00:59:39.078723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.278 [2024-05-15 00:59:39.078732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.278 [2024-05-15 00:59:39.078738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.278 [2024-05-15 00:59:39.078744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.278 [2024-05-15 00:59:39.078754] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:52.278 [2024-05-15 00:59:39.078761] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.278 [2024-05-15 00:59:39.078771] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.278 [2024-05-15 00:59:39.078782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.278 [2024-05-15 00:59:39.078795] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.278 [2024-05-15 00:59:39.078870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.278 [2024-05-15 00:59:39.078878] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.278 [2024-05-15 00:59:39.078883] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.278 [2024-05-15 00:59:39.078888] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.278 [2024-05-15 00:59:39.078899] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.278 [2024-05-15 00:59:39.078905] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.278 [2024-05-15 00:59:39.078912] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.278 [2024-05-15 00:59:39.078921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.278 [2024-05-15 00:59:39.078934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.278 [2024-05-15 00:59:39.079021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.278 [2024-05-15 00:59:39.079028] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.079032] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079036] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 00:59:39.079043] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:52.279 [2024-05-15 00:59:39.079058] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:52.279 [2024-05-15 00:59:39.079071] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079081] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.079090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.079105] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.079180] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.079186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.079191] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079195] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 00:59:39.079205] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079209] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079214] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.079223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.079233] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.079310] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.079316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.079322] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079326] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 00:59:39.079335] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079340] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079344] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.079352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.079362] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.079438] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.079446] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.079450] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079455] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 00:59:39.079464] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079468] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079472] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.079482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.079492] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.079574] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.079582] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.079586] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079590] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 
00:59:39.079601] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079610] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.079618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.079628] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.079705] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.079711] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.079715] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079720] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 00:59:39.079733] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079738] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079744] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.079752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.079762] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.079835] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.079841] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.079847] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079851] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 00:59:39.079860] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079864] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079869] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.079876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.079886] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.079958] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.079964] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.079968] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079974] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 00:59:39.079983] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079987] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.079992] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.080000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.080010] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.080082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.080088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.080092] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.080097] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 00:59:39.080106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.080111] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.080115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.080124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.080134] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.080213] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.080219] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.080224] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.080228] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.279 [2024-05-15 00:59:39.080237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.080241] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.279 [2024-05-15 00:59:39.080246] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.279 [2024-05-15 00:59:39.080253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.279 [2024-05-15 00:59:39.080263] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.279 [2024-05-15 00:59:39.080342] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.279 [2024-05-15 00:59:39.080350] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.279 [2024-05-15 00:59:39.080356] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080360] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.080370] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080374] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080378] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.080386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.080395] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.080463] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.080469] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.080473] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080477] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.080486] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080494] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080499] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.080510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.080520] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.080588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.080596] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.080600] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080604] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.080613] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080618] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080622] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.080631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.080641] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.080721] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.080730] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.080734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080738] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.080747] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080751] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080756] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.080763] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.080773] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.080845] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.080851] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.080856] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080861] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.080871] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080876] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080880] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.080889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.080899] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.080978] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.080985] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.080989] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.080993] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.081002] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081007] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081011] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.081021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.081031] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.081106] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.081113] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.081117] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081121] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.081130] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081134] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081139] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.081147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.280 [2024-05-15 00:59:39.081157] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.081235] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.081241] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.081247] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081251] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.081260] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081265] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081269] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.081277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.081287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.081354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.081360] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.081369] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081373] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.081383] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081387] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081392] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.081401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.081411] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.081479] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.081487] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.081491] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081495] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.081504] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081508] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081517] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.081527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.081536] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, 
qid 0 00:22:52.280 [2024-05-15 00:59:39.081609] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.081617] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.081621] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081625] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.081636] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081641] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081645] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.280 [2024-05-15 00:59:39.081653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.280 [2024-05-15 00:59:39.081662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.280 [2024-05-15 00:59:39.081738] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.280 [2024-05-15 00:59:39.081745] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.280 [2024-05-15 00:59:39.081750] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.280 [2024-05-15 00:59:39.081754] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.280 [2024-05-15 00:59:39.081763] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.081768] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.081773] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.281 [2024-05-15 00:59:39.081782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.281 [2024-05-15 00:59:39.081791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.281 [2024-05-15 00:59:39.081864] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.281 [2024-05-15 00:59:39.081870] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.281 [2024-05-15 00:59:39.081875] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.081880] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.281 [2024-05-15 00:59:39.081889] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.081894] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.081898] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.281 [2024-05-15 00:59:39.081906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.281 [2024-05-15 00:59:39.081915] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.281 [2024-05-15 00:59:39.081987] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.281 [2024-05-15 
00:59:39.081994] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.281 [2024-05-15 00:59:39.082001] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.082006] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.281 [2024-05-15 00:59:39.082016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.082021] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.082025] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.281 [2024-05-15 00:59:39.082033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.281 [2024-05-15 00:59:39.082042] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.281 [2024-05-15 00:59:39.086066] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.281 [2024-05-15 00:59:39.086073] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.281 [2024-05-15 00:59:39.086077] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.086081] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.281 [2024-05-15 00:59:39.086096] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.086101] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.086105] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:22:52.281 [2024-05-15 00:59:39.086113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.281 [2024-05-15 00:59:39.086124] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:52.281 [2024-05-15 00:59:39.086204] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.281 [2024-05-15 00:59:39.086210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.281 [2024-05-15 00:59:39.086215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.281 [2024-05-15 00:59:39.086219] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:22:52.281 [2024-05-15 00:59:39.086227] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:22:52.281 0 Kelvin (-273 Celsius) 00:22:52.281 Available Spare: 0% 00:22:52.281 Available Spare Threshold: 0% 00:22:52.281 Life Percentage Used: 0% 00:22:52.281 Data Units Read: 0 00:22:52.281 Data Units Written: 0 00:22:52.281 Host Read Commands: 0 00:22:52.281 Host Write Commands: 0 00:22:52.281 Controller Busy Time: 0 minutes 00:22:52.281 Power Cycles: 0 00:22:52.281 Power On Hours: 0 hours 00:22:52.281 Unsafe Shutdowns: 0 00:22:52.281 Unrecoverable Media Errors: 0 00:22:52.281 Lifetime Error Log Entries: 0 00:22:52.281 Warning Temperature Time: 0 minutes 00:22:52.281 Critical Temperature Time: 0 minutes 00:22:52.281 00:22:52.281 Number of Queues 00:22:52.281 ================ 00:22:52.281 Number of I/O Submission Queues: 127 00:22:52.281 Number 
of I/O Completion Queues: 127 00:22:52.281 00:22:52.281 Active Namespaces 00:22:52.281 ================= 00:22:52.281 Namespace ID:1 00:22:52.281 Error Recovery Timeout: Unlimited 00:22:52.281 Command Set Identifier: NVM (00h) 00:22:52.281 Deallocate: Supported 00:22:52.281 Deallocated/Unwritten Error: Not Supported 00:22:52.281 Deallocated Read Value: Unknown 00:22:52.281 Deallocate in Write Zeroes: Not Supported 00:22:52.281 Deallocated Guard Field: 0xFFFF 00:22:52.281 Flush: Supported 00:22:52.281 Reservation: Supported 00:22:52.281 Namespace Sharing Capabilities: Multiple Controllers 00:22:52.281 Size (in LBAs): 131072 (0GiB) 00:22:52.281 Capacity (in LBAs): 131072 (0GiB) 00:22:52.281 Utilization (in LBAs): 131072 (0GiB) 00:22:52.281 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:52.281 EUI64: ABCDEF0123456789 00:22:52.281 UUID: 8f43c1ec-6e0d-434f-980a-156b7a64f710 00:22:52.281 Thin Provisioning: Not Supported 00:22:52.281 Per-NS Atomic Units: Yes 00:22:52.281 Atomic Boundary Size (Normal): 0 00:22:52.281 Atomic Boundary Size (PFail): 0 00:22:52.281 Atomic Boundary Offset: 0 00:22:52.281 Maximum Single Source Range Length: 65535 00:22:52.281 Maximum Copy Length: 65535 00:22:52.281 Maximum Source Range Count: 1 00:22:52.281 NGUID/EUI64 Never Reused: No 00:22:52.281 Namespace Write Protected: No 00:22:52.281 Number of LBA Formats: 1 00:22:52.281 Current LBA Format: LBA Format #00 00:22:52.281 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:52.281 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.281 rmmod nvme_tcp 00:22:52.281 rmmod nvme_fabrics 00:22:52.281 rmmod nvme_keyring 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3549011 ']' 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3549011 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3549011 ']' 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3549011 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3549011 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3549011' 00:22:52.281 killing process with pid 3549011 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3549011 00:22:52.281 [2024-05-15 00:59:39.266392] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:52.281 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3549011 00:22:52.849 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.849 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.849 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.849 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.849 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.849 00:59:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.849 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.849 00:59:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.387 00:59:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.387 00:22:55.387 real 0m9.819s 00:22:55.387 user 0m8.027s 00:22:55.387 sys 0m4.769s 00:22:55.387 00:59:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:55.387 00:59:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.387 ************************************ 00:22:55.387 END TEST nvmf_identify 00:22:55.387 ************************************ 00:22:55.387 00:59:41 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:55.387 00:59:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:55.387 00:59:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:55.387 00:59:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.387 ************************************ 00:22:55.387 START TEST nvmf_perf 00:22:55.387 ************************************ 00:22:55.387 00:59:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:55.387 * Looking for test storage... 
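(For reference: the nvmf_identify teardown traced just above reduces to the sequence below. This is a condensed sketch, not the literal script -- the PID 3549011 and the cvl_0_1 interface are specific to this run, rpc.py stands in for the rpc_cmd wrapper, and the body of the _remove_spdk_ns helper is not shown in the trace.)

sync
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
modprobe -v -r nvme-tcp                                           # unload host-side NVMe/TCP modules
modprobe -v -r nvme-fabrics
kill 3549011 && wait 3549011                                      # stop the nvmf_tgt reactor process
_remove_spdk_ns                                                   # tear down the cvl_0_0_ns_spdk namespace (helper, body not traced here)
ip -4 addr flush cvl_0_1                                          # clear the initiator-side address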
00:22:55.387 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.387 00:59:42 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.387 00:59:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # 
set +x 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:00.657 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:00.657 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:00.657 Found net devices under 0000:27:00.0: cvl_0_0 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:00.657 Found net devices under 0000:27:00.1: cvl_0_1 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.657 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.658 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:23:00.917 00:23:00.917 --- 10.0.0.2 ping statistics --- 00:23:00.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.917 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:23:00.917 00:23:00.917 --- 10.0.0.1 ping statistics --- 00:23:00.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.917 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3553238 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3553238 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3553238 ']' 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:00.917 00:59:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:01.176 [2024-05-15 00:59:48.066666] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:23:01.176 [2024-05-15 00:59:48.066797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.176 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.176 [2024-05-15 00:59:48.218296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.435 [2024-05-15 00:59:48.326670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.435 [2024-05-15 00:59:48.326714] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:01.435 [2024-05-15 00:59:48.326723] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.435 [2024-05-15 00:59:48.326732] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.435 [2024-05-15 00:59:48.326740] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.435 [2024-05-15 00:59:48.326834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.435 [2024-05-15 00:59:48.326841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.435 [2024-05-15 00:59:48.326945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.435 [2024-05-15 00:59:48.326956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.001 00:59:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:02.001 00:59:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:23:02.001 00:59:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.001 00:59:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.001 00:59:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:02.001 00:59:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.001 00:59:48 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:02.001 00:59:48 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:02.936 00:59:49 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:02.936 00:59:49 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:02.936 00:59:49 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:03:00.0 00:23:02.936 00:59:49 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:03.194 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:03.194 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:03:00.0 ']' 00:23:03.194 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:03.194 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:03.194 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.194 [2024-05-15 00:59:50.170854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.194 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.451 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:03.451 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.451 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:03.451 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:23:03.711 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.711 [2024-05-15 00:59:50.759598] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:03.711 [2024-05-15 00:59:50.759905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.969 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:03.969 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:03:00.0 ']' 00:23:03.969 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:23:03.969 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:03.969 00:59:50 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:23:05.347 Initializing NVMe Controllers 00:23:05.347 Attached to NVMe Controller at 0000:03:00.0 [1344:51c3] 00:23:05.347 Associating PCIE (0000:03:00.0) NSID 1 with lcore 0 00:23:05.347 Initialization complete. Launching workers. 00:23:05.347 ======================================================== 00:23:05.347 Latency(us) 00:23:05.347 Device Information : IOPS MiB/s Average min max 00:23:05.347 PCIE (0000:03:00.0) NSID 1 from core 0: 86763.51 338.92 368.49 76.48 5422.24 00:23:05.347 ======================================================== 00:23:05.347 Total : 86763.51 338.92 368.49 76.48 5422.24 00:23:05.347 00:23:05.347 00:59:52 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:05.347 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.718 Initializing NVMe Controllers 00:23:06.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:06.718 Initialization complete. Launching workers. 
00:23:06.718 ======================================================== 00:23:06.718 Latency(us) 00:23:06.718 Device Information : IOPS MiB/s Average min max 00:23:06.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 107.68 0.42 9307.67 112.68 45898.11 00:23:06.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.80 0.26 15316.94 3992.22 52919.79 00:23:06.718 ======================================================== 00:23:06.718 Total : 173.48 0.68 11587.05 112.68 52919.79 00:23:06.718 00:23:06.974 00:59:53 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:06.974 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.390 Initializing NVMe Controllers 00:23:08.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:08.390 Initialization complete. Launching workers. 00:23:08.390 ======================================================== 00:23:08.390 Latency(us) 00:23:08.390 Device Information : IOPS MiB/s Average min max 00:23:08.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11399.27 44.53 2808.59 402.35 6703.39 00:23:08.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3950.01 15.43 8128.00 6490.55 15380.94 00:23:08.390 ======================================================== 00:23:08.390 Total : 15349.28 59.96 4177.50 402.35 15380.94 00:23:08.390 00:23:08.390 00:59:55 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:08.390 00:59:55 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:08.390 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.926 Initializing NVMe Controllers 00:23:10.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.926 Controller IO queue size 128, less than required. 00:23:10.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.926 Controller IO queue size 128, less than required. 00:23:10.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:10.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:10.926 Initialization complete. Launching workers. 
00:23:10.926 ======================================================== 00:23:10.926 Latency(us) 00:23:10.926 Device Information : IOPS MiB/s Average min max 00:23:10.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2334.99 583.75 56142.77 32830.51 149131.81 00:23:10.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.00 144.25 226847.56 88308.88 335810.95 00:23:10.926 ======================================================== 00:23:10.926 Total : 2911.98 728.00 89967.17 32830.51 335810.95 00:23:10.926 00:23:10.926 00:59:57 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:10.926 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.183 No valid NVMe controllers or AIO or URING devices found 00:23:11.183 Initializing NVMe Controllers 00:23:11.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.183 Controller IO queue size 128, less than required. 00:23:11.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.183 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:11.183 Controller IO queue size 128, less than required. 00:23:11.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.183 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:11.183 WARNING: Some requested NVMe devices were skipped 00:23:11.183 00:59:58 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:11.183 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.466 Initializing NVMe Controllers 00:23:14.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.466 Controller IO queue size 128, less than required. 00:23:14.466 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.466 Controller IO queue size 128, less than required. 00:23:14.466 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:14.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:14.466 Initialization complete. Launching workers. 
00:23:14.466 00:23:14.466 ==================== 00:23:14.466 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:14.466 TCP transport: 00:23:14.466 polls: 19122 00:23:14.466 idle_polls: 12070 00:23:14.466 sock_completions: 7052 00:23:14.466 nvme_completions: 8247 00:23:14.466 submitted_requests: 12432 00:23:14.466 queued_requests: 1 00:23:14.466 00:23:14.466 ==================== 00:23:14.467 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:14.467 TCP transport: 00:23:14.467 polls: 17019 00:23:14.467 idle_polls: 7978 00:23:14.467 sock_completions: 9041 00:23:14.467 nvme_completions: 8901 00:23:14.467 submitted_requests: 13312 00:23:14.467 queued_requests: 1 00:23:14.467 ======================================================== 00:23:14.467 Latency(us) 00:23:14.467 Device Information : IOPS MiB/s Average min max 00:23:14.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2058.36 514.59 63784.50 40846.29 177260.51 00:23:14.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2221.61 555.40 57993.23 32203.52 159236.25 00:23:14.467 ======================================================== 00:23:14.467 Total : 4279.97 1069.99 60778.42 32203.52 177260.51 00:23:14.467 00:23:14.467 01:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:14.467 01:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.467 rmmod nvme_tcp 00:23:14.467 rmmod nvme_fabrics 00:23:14.467 rmmod nvme_keyring 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3553238 ']' 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3553238 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3553238 ']' 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3553238 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3553238 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 3553238' 00:23:14.467 killing process with pid 3553238 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3553238 00:23:14.467 [2024-05-15 01:00:01.288488] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:14.467 01:00:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3553238 00:23:15.846 01:00:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.846 01:00:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.846 01:00:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.846 01:00:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.846 01:00:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.846 01:00:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.846 01:00:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.846 01:00:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.756 01:00:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.756 00:23:17.756 real 0m22.847s 00:23:17.756 user 0m57.951s 00:23:17.756 sys 0m7.212s 00:23:17.756 01:00:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:17.756 01:00:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.756 ************************************ 00:23:17.756 END TEST nvmf_perf 00:23:17.756 ************************************ 00:23:17.756 01:00:04 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:17.756 01:00:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:17.756 01:00:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:17.756 01:00:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.014 ************************************ 00:23:18.014 START TEST nvmf_fio_host 00:23:18.014 ************************************ 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:18.014 * Looking for test storage... 
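(For reference: the nvmf_perf run that just finished configures the target through rpc.py and then sweeps spdk_nvme_perf across several queue-depth/IO-size combinations. A condensed sketch of the main steps follows; the absolute workspace paths from the trace are shortened to the repo root, and the TGT variable is introduced here only for brevity.)

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # the 64 MB malloc bdev created earlier
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1      # the local NVMe device at 0000:03:00.0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
build/bin/spdk_nvme_perf -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$TGT"
build/bin/spdk_nvme_perf -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$TGT"
build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$TGT"
build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$TGT" --transport-stat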
00:23:18.014 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.014 
01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.014 01:00:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.015 01:00:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.287 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:23.288 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:23.288 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:23.288 Found net devices under 0000:27:00.0: cvl_0_0 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:23.288 Found net devices under 0000:27:00.1: cvl_0_1 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 
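(The block above maps each supported NIC's PCI address to its kernel network interface through sysfs. Below is a minimal standalone sketch of that lookup, using the two E810 ports found in this run; the real common.sh loop additionally checks that the interface is up before counting it.)

for pci in 0000:27:00.0 0000:27:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # netdev directories the kernel exposes per PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names (e.g. cvl_0_0)
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done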
00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.288 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.548 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.548 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.548 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.548 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.548 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:23:23.548 00:23:23.548 --- 10.0.0.2 ping statistics --- 00:23:23.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.548 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:23:23.548 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:23:23.548 00:23:23.548 --- 10.0.0.1 ping statistics --- 00:23:23.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.548 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=3560680 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 3560680 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3560680 ']' 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.549 01:00:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:23.809 [2024-05-15 01:00:10.619145] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:23:23.809 [2024-05-15 01:00:10.619283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.809 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.809 [2024-05-15 01:00:10.766636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:23.809 [2024-05-15 01:00:10.864898] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:23.809 [2024-05-15 01:00:10.864945] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.809 [2024-05-15 01:00:10.864956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.809 [2024-05-15 01:00:10.864966] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.809 [2024-05-15 01:00:10.864975] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.809 [2024-05-15 01:00:10.865111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.809 [2024-05-15 01:00:10.865172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.809 [2024-05-15 01:00:10.865280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.809 [2024-05-15 01:00:10.865290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.378 [2024-05-15 01:00:11.311589] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.378 Malloc1 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:23:24.378 [2024-05-15 01:00:11.407730] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:24.378 [2024-05-15 01:00:11.408014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:24.378 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # break 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:24.379 01:00:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:24.957 test: (g=0): rw=randrw, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:24.957 fio-3.35 00:23:24.957 Starting 1 thread 00:23:24.957 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.484 00:23:27.484 test: (groupid=0, jobs=1): err= 0: pid=3561142: Wed May 15 01:00:14 2024 00:23:27.484 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(96.7MiB/2005msec) 00:23:27.484 slat (nsec): min=1568, max=90715, avg=1722.54, stdev=840.89 00:23:27.484 clat (usec): min=1834, max=9756, avg=5732.87, stdev=446.83 00:23:27.484 lat (usec): min=1852, max=9757, avg=5734.59, stdev=446.78 00:23:27.484 clat percentiles (usec): 00:23:27.484 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:23:27.484 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5800], 00:23:27.484 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:23:27.484 | 99.00th=[ 6915], 99.50th=[ 7373], 99.90th=[ 8160], 99.95th=[ 8717], 00:23:27.484 | 99.99th=[ 9372] 00:23:27.484 bw ( KiB/s): min=48512, max=50144, per=100.00%, avg=49370.00, stdev=681.31, samples=4 00:23:27.484 iops : min=12128, max=12536, avg=12342.50, stdev=170.33, samples=4 00:23:27.484 write: IOPS=12.3k, BW=48.1MiB/s (50.4MB/s)(96.5MiB/2005msec); 0 zone resets 00:23:27.484 slat (nsec): min=1612, max=80018, avg=1809.99, stdev=596.92 00:23:27.484 clat (usec): min=935, max=9198, avg=4615.07, stdev=369.12 00:23:27.484 lat (usec): min=942, max=9200, avg=4616.88, stdev=369.09 00:23:27.484 clat percentiles (usec): 00:23:27.484 | 1.00th=[ 3818], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4359], 00:23:27.484 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:23:27.484 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5145], 00:23:27.484 | 99.00th=[ 5538], 99.50th=[ 5866], 99.90th=[ 7504], 99.95th=[ 8094], 00:23:27.484 | 99.99th=[ 9110] 00:23:27.484 bw ( KiB/s): min=48336, max=50096, per=99.97%, avg=49252.00, stdev=726.36, samples=4 00:23:27.484 iops : min=12084, max=12524, avg=12313.00, stdev=181.59, samples=4 00:23:27.484 lat (usec) : 1000=0.01% 00:23:27.484 lat (msec) : 2=0.04%, 4=1.59%, 10=98.36% 00:23:27.484 cpu : usr=84.73%, sys=14.92%, ctx=4, majf=0, minf=1531 00:23:27.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:27.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:27.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:27.484 issued rwts: total=24743,24695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:27.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:27.484 00:23:27.484 Run status group 0 (all jobs): 00:23:27.484 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=96.7MiB (101MB), run=2005-2005msec 00:23:27.484 WRITE: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=96.5MiB (101MB), run=2005-2005msec 00:23:27.484 ----------------------------------------------------- 00:23:27.484 Suppressions used: 00:23:27.484 count bytes template 00:23:27.484 1 57 /usr/src/fio/parse.c 00:23:27.484 1 8 libtcmalloc_minimal.so 00:23:27.484 ----------------------------------------------------- 00:23:27.484 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:23:27.484 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:27.753 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:27.753 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:27.753 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # break 00:23:27.753 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:27.753 01:00:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:28.012 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:28.012 fio-3.35 00:23:28.012 Starting 1 thread 00:23:28.012 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.544 00:23:30.544 test: (groupid=0, jobs=1): err= 0: pid=3561873: Wed May 15 01:00:17 2024 00:23:30.544 read: IOPS=9511, BW=149MiB/s (156MB/s)(298MiB/2004msec) 00:23:30.544 slat (nsec): min=2579, max=96997, avg=3403.66, stdev=1487.92 00:23:30.544 clat (usec): min=1670, max=51696, avg=8065.41, stdev=3977.71 00:23:30.544 lat (usec): min=1673, max=51700, avg=8068.82, stdev=3978.06 00:23:30.544 clat percentiles (usec): 00:23:30.544 | 1.00th=[ 3752], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5735], 00:23:30.544 | 30.00th=[ 6325], 40.00th=[ 7046], 50.00th=[ 7504], 60.00th=[ 8029], 00:23:30.544 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[11207], 95.00th=[12387], 00:23:30.544 | 99.00th=[15270], 99.50th=[45351], 99.90th=[49546], 99.95th=[51119], 00:23:30.544 | 99.99th=[51643] 00:23:30.544 bw ( KiB/s): min=62048, max=91840, per=49.21%, avg=74895.00, stdev=13632.24, samples=4 00:23:30.544 iops : min= 3878, max= 5740, avg=4680.75, stdev=852.18, samples=4 00:23:30.544 write: IOPS=5714, BW=89.3MiB/s (93.6MB/s)(153MiB/1714msec); 0 zone resets 00:23:30.544 slat (usec): min=28, max=198, avg=37.27, stdev=10.81 00:23:30.544 clat (usec): min=1605, max=17006, 
avg=9650.83, stdev=2282.65 00:23:30.544 lat (usec): min=1633, max=17034, avg=9688.10, stdev=2290.01 00:23:30.544 clat percentiles (usec): 00:23:30.544 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7701], 00:23:30.544 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10028], 00:23:30.544 | 70.00th=[10814], 80.00th=[11731], 90.00th=[12911], 95.00th=[13698], 00:23:30.544 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16581], 99.95th=[16712], 00:23:30.544 | 99.99th=[16909] 00:23:30.544 bw ( KiB/s): min=63840, max=95008, per=85.50%, avg=78181.50, stdev=14370.64, samples=4 00:23:30.544 iops : min= 3990, max= 5938, avg=4886.25, stdev=898.25, samples=4 00:23:30.544 lat (msec) : 2=0.05%, 4=1.21%, 10=73.48%, 20=24.82%, 50=0.40% 00:23:30.544 lat (msec) : 100=0.05% 00:23:30.544 cpu : usr=88.12%, sys=11.43%, ctx=10, majf=0, minf=2340 00:23:30.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:23:30.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.544 issued rwts: total=19062,9795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.544 00:23:30.544 Run status group 0 (all jobs): 00:23:30.544 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=298MiB (312MB), run=2004-2004msec 00:23:30.544 WRITE: bw=89.3MiB/s (93.6MB/s), 89.3MiB/s-89.3MiB/s (93.6MB/s-93.6MB/s), io=153MiB (160MB), run=1714-1714msec 00:23:30.544 ----------------------------------------------------- 00:23:30.544 Suppressions used: 00:23:30.544 count bytes template 00:23:30.544 1 57 /usr/src/fio/parse.c 00:23:30.544 114 10944 /usr/src/fio/iolog.c 00:23:30.544 1 8 libtcmalloc_minimal.so 00:23:30.544 ----------------------------------------------------- 00:23:30.544 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:30.544 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.803 rmmod nvme_tcp 00:23:30.803 rmmod nvme_fabrics 00:23:30.803 rmmod nvme_keyring 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:30.803 
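Note on the two fio runs above: both go through the SPDK fio plugin, so fio's I/O lands on the NVMe-oF/TCP subsystem rather than on a kernel block device. A condensed sketch of that invocation, assembled from the LD_PRELOAD and command lines visible in the trace; the ASan library is preloaded only because this is an ASan build, and the job file contents are not reproduced in this log:

    # Drive fio through the SPDK NVMe plugin against the target at 10.0.0.2:4420.
    PLUGIN=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme
    CONFIG=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio
    LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" /usr/src/fio/fio "$CONFIG" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

The second run above follows the same pattern but swaps in mock_sgl_config.fio and drops the --bs override.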
01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3560680 ']' 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3560680 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3560680 ']' 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3560680 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3560680 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3560680' 00:23:30.803 killing process with pid 3560680 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3560680 00:23:30.803 [2024-05-15 01:00:17.709821] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:30.803 01:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3560680 00:23:31.370 01:00:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:31.370 01:00:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:31.370 01:00:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:31.370 01:00:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.370 01:00:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.370 01:00:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.370 01:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.370 01:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.280 01:00:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.280 00:23:33.280 real 0m15.464s 00:23:33.280 user 1m9.212s 00:23:33.280 sys 0m5.917s 00:23:33.280 01:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:33.280 01:00:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.280 ************************************ 00:23:33.280 END TEST nvmf_fio_host 00:23:33.280 ************************************ 00:23:33.280 01:00:20 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:33.280 01:00:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:33.280 01:00:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:33.280 01:00:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.539 ************************************ 00:23:33.539 START TEST nvmf_failover 00:23:33.539 ************************************ 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:33.539 * Looking for 
test storage... 00:23:33.539 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.539 01:00:20 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 
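Note: nvmftestinit is about to rediscover the NICs and rebuild the same namespace plumbing that the nvmf_fio_host run used. For reference, a condensed sketch of what nvmf_tcp_init issues for the two cvl ports, taken from the commands traced earlier in this log:

    # cvl_0_1 stays in the root namespace as the initiator (10.0.0.1);
    # cvl_0_0 moves into cvl_0_0_ns_spdk and becomes the target side (10.0.0.2).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator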
00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.540 01:00:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:38.914 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:38.914 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:38.914 Found net devices under 0000:27:00.0: cvl_0_0 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:38.914 Found net devices under 0000:27:00.1: cvl_0_1 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.914 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:38.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.764 ms 00:23:38.915 00:23:38.915 --- 10.0.0.2 ping statistics --- 00:23:38.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.915 rtt min/avg/max/mdev = 0.764/0.764/0.764/0.000 ms 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.454 ms 00:23:38.915 00:23:38.915 --- 10.0.0.1 ping statistics --- 00:23:38.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.915 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3566326 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3566326 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3566326 ']' 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.915 01:00:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:38.915 [2024-05-15 01:00:25.581481] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:23:38.915 [2024-05-15 01:00:25.581609] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.915 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.915 [2024-05-15 01:00:25.746166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:38.915 [2024-05-15 01:00:25.895360] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.915 [2024-05-15 01:00:25.895417] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.915 [2024-05-15 01:00:25.895432] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.915 [2024-05-15 01:00:25.895447] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.915 [2024-05-15 01:00:25.895459] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.915 [2024-05-15 01:00:25.895630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.915 [2024-05-15 01:00:25.895738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.915 [2024-05-15 01:00:25.895749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.482 01:00:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.482 01:00:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:23:39.482 01:00:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.482 01:00:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.482 01:00:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:39.482 01:00:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.483 01:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:39.483 [2024-05-15 01:00:26.435386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.483 01:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:39.741 Malloc0 00:23:39.741 01:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:40.001 01:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:40.001 01:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.259 [2024-05-15 01:00:27.064148] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:40.259 [2024-05-15 01:00:27.064511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.259 01:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- 
# /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:40.259 [2024-05-15 01:00:27.196499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.259 01:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:40.517 [2024-05-15 01:00:27.328675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3566657 00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3566657 /var/tmp/bdevperf.sock 00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3566657 ']' 00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
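Note on the setup above: failover.sh builds a target with one TCP transport, a 64 MiB Malloc0 namespace (512-byte blocks) under nqn.2016-06.io.spdk:cnode1, and listeners on ports 4420, 4421 and 4422 so the initiator has alternate portals to fail over to; bdevperf is then started in wait-for-RPC mode on its own socket. Condensed from the rpc.py calls traced above; SPDK_ROOT is shorthand introduced here for the long workspace path:

    # Target-side topology for the failover test, as traced above.
    SPDK_ROOT=/var/jenkins/workspace/dsa-phy-autotest/spdk   # shorthand for this workspace
    RPC="$SPDK_ROOT/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Initiator side: bdevperf waits for RPC configuration on its own socket.
    $SPDK_ROOT/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 15 -f &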
00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.517 01:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:41.081 01:00:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:41.081 01:00:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:23:41.081 01:00:28 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:41.340 NVMe0n1 00:23:41.340 01:00:28 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:41.910 00:23:41.910 01:00:28 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3566952 00:23:41.910 01:00:28 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:41.910 01:00:28 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.846 01:00:29 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.846 [2024-05-15 01:00:29.823427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:23:42.846 [2024-05-15 01:00:29.823562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 
00:23:42.847 01:00:29 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:46.132 01:00:32 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:46.390
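With the 4422 portal attached as a third path, a quick sanity check when reproducing this by hand is to ask bdevperf's bdev_nvme layer which controllers and transport IDs it currently tracks. This is not part of failover.sh; it is only a hedged suggestion using the standard bdev_nvme_get_controllers RPC:

  # list the NVMe controllers (and the trid of each attached path) known to the bdevperf app
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers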
00:23:46.390 01:00:33 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:46.390 01:00:33 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:49.675 01:00:36 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:49.675 [2024-05-15 01:00:36.510481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:49.675 01:00:36 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:50.608 01:00:37 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:50.608 [2024-05-15 01:00:37.657572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set
00:23:50.867 01:00:37 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3566952
00:23:57.444 0
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3566657
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3566657 ']'
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3566657
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3566657
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3566657'
killing process with pid 3566657
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3566657
00:23:57.444 01:00:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3566657
00:23:57.444 01:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.444 [2024-05-15 01:00:27.395212] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:23:57.444 [2024-05-15 01:00:27.395292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3566657 ] 00:23:57.444 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.444 [2024-05-15 01:00:27.484483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.444 [2024-05-15 01:00:27.574847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.444 Running I/O for 15 seconds... 00:23:57.444 [2024-05-15 01:00:29.824534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.444 [2024-05-15 01:00:29.824936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-05-15 01:00:29.824962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.444 [2024-05-15 01:00:29.824972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.824979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.824989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.824998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825118] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.445 [2024-05-15 01:00:29.825673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-05-15 01:00:29.825680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.825976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.825985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 
01:00:29.825993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-05-15 01:00:29.826292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-05-15 01:00:29.826310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-05-15 01:00:29.826390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-05-15 01:00:29.826397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.447 [2024-05-15 01:00:29.826704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:29.826835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.826844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3e80 is same with the state(5) to be set 00:23:57.447 [2024-05-15 01:00:29.826858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.447 [2024-05-15 01:00:29.826866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.447 [2024-05-15 01:00:29.826876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98392 len:8 PRP1 0x0 PRP2 0x0 00:23:57.447 [2024-05-15 01:00:29.826885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.827012] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a3e80 was disconnected and freed. reset controller. 00:23:57.447 [2024-05-15 01:00:29.827033] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:57.447 [2024-05-15 01:00:29.827071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.447 [2024-05-15 01:00:29.827084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.827097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.447 [2024-05-15 01:00:29.827108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.827119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.447 [2024-05-15 01:00:29.827129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.827138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.447 [2024-05-15 01:00:29.827147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:29.827155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.447 [2024-05-15 01:00:29.827195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:23:57.447 [2024-05-15 01:00:29.829758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.447 [2024-05-15 01:00:29.992120] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
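[editor note] The block above records the first failover event in this test: queued I/O on qid 1 are completed as "ABORTED - SQ DELETION", the TCP qpair 0x6150003a3e80 is disconnected and freed, bdev_nvme starts failover from 10.0.0.2:4420 to 10.0.0.2:4421, the controller briefly enters the failed state, and the reset then completes successfully. As a minimal sketch (not part of the test suite), the script below summarizes this pattern from a saved copy of the console output; the file name "console.log" is a hypothetical local path, and the regular expressions are taken directly from the notices printed above.

```python
#!/usr/bin/env python3
"""Sketch: summarize bdev_nvme failover activity from a saved SPDK console log.
Assumes the log shown above has been saved locally as 'console.log' (hypothetical path)."""
import re

FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION")
RESET_OK_RE = re.compile(r"Resetting controller successful")

aborted_since_last_event = 0   # aborted completions seen since the previous event
events = []                    # (event description, aborted completions before it)

with open("console.log") as log:
    for line in log:
        # A single console line may carry several wrapped log entries, so count all matches.
        aborted_since_last_event += len(ABORT_RE.findall(line))
        m = FAILOVER_RE.search(line)
        if m:
            events.append((f"failover {m.group(1)} -> {m.group(2)}", aborted_since_last_event))
            aborted_since_last_event = 0
        elif RESET_OK_RE.search(line):
            events.append(("controller reset successful", aborted_since_last_event))
            aborted_since_last_event = 0

for description, aborted in events:
    print(f"{description:45s} aborted completions before event: {aborted}")
```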
00:23:57.447 [2024-05-15 01:00:33.367735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:33.367802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:33.367826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:33.367841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:33.367852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:33.367860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:33.367869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:33.367877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:33.367887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:33.367894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:33.367904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:33.367912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.447 [2024-05-15 01:00:33.367932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-05-15 01:00:33.367939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.367949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.367958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.367967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.367975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.367985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.367993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 
01:00:33.368003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-05-15 01:00:33.368390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-05-15 01:00:33.368408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-05-15 01:00:33.368427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-05-15 01:00:33.368444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-05-15 01:00:33.368461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-05-15 01:00:33.368478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-05-15 01:00:33.368494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-05-15 01:00:33.368511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-05-15 01:00:33.368531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.448 [2024-05-15 01:00:33.368540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82288 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-05-15 01:00:33.368547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 
[2024-05-15 01:00:33.368720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.368983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.368990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-05-15 01:00:33.369251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-05-15 01:00:33.369261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369417] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369586] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82856 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-05-15 01:00:33.369763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.450 [2024-05-15 01:00:33.369807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82864 len:8 PRP1 0x0 PRP2 0x0 00:23:57.450 [2024-05-15 01:00:33.369816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.450 [2024-05-15 01:00:33.369838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.450 [2024-05-15 01:00:33.369846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82872 len:8 PRP1 0x0 PRP2 0x0 00:23:57.450 [2024-05-15 01:00:33.369854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.450 [2024-05-15 01:00:33.369870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.450 [2024-05-15 01:00:33.369878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82880 len:8 PRP1 0x0 PRP2 0x0 00:23:57.450 [2024-05-15 01:00:33.369886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.450 [2024-05-15 01:00:33.369899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.450 [2024-05-15 01:00:33.369906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82888 len:8 PRP1 0x0 PRP2 0x0 00:23:57.450 [2024-05-15 01:00:33.369914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.450 [2024-05-15 01:00:33.369927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.450 [2024-05-15 01:00:33.369934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82896 len:8 PRP1 0x0 PRP2 0x0 00:23:57.450 [2024-05-15 01:00:33.369941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.450 [2024-05-15 01:00:33.369955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.450 [2024-05-15 01:00:33.369962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82904 len:8 PRP1 0x0 PRP2 0x0 00:23:57.450 [2024-05-15 01:00:33.369970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.450 [2024-05-15 01:00:33.369978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.450 [2024-05-15 01:00:33.369984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.450 [2024-05-15 01:00:33.369991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82912 len:8 PRP1 0x0 PRP2 0x0 00:23:57.450 [2024-05-15 01:00:33.369999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.451 [2024-05-15 01:00:33.370012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.451 [2024-05-15 01:00:33.370019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82920 len:8 PRP1 0x0 PRP2 0x0 00:23:57.451 [2024-05-15 01:00:33.370026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.451 [2024-05-15 01:00:33.370040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.451 [2024-05-15 01:00:33.370050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82928 len:8 PRP1 0x0 PRP2 0x0 00:23:57.451 [2024-05-15 01:00:33.370058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.451 [2024-05-15 01:00:33.370072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.451 [2024-05-15 01:00:33.370080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 PRP1 0x0 PRP2 0x0 00:23:57.451 [2024-05-15 01:00:33.370089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.451 [2024-05-15 01:00:33.370103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.451 [2024-05-15 01:00:33.370110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:23:57.451 [2024-05-15 01:00:33.370118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.451 [2024-05-15 01:00:33.370156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.451 [2024-05-15 01:00:33.370163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82952 len:8 PRP1 0x0 PRP2 0x0 00:23:57.451 [2024-05-15 01:00:33.370171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 
[2024-05-15 01:00:33.370179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.451 [2024-05-15 01:00:33.370185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.451 [2024-05-15 01:00:33.370192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82960 len:8 PRP1 0x0 PRP2 0x0 00:23:57.451 [2024-05-15 01:00:33.370200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.451 [2024-05-15 01:00:33.370214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.451 [2024-05-15 01:00:33.370221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82968 len:8 PRP1 0x0 PRP2 0x0 00:23:57.451 [2024-05-15 01:00:33.370229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.451 [2024-05-15 01:00:33.370243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.451 [2024-05-15 01:00:33.370250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82208 len:8 PRP1 0x0 PRP2 0x0 00:23:57.451 [2024-05-15 01:00:33.370258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.451 [2024-05-15 01:00:33.370272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.451 [2024-05-15 01:00:33.370279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82216 len:8 PRP1 0x0 PRP2 0x0 00:23:57.451 [2024-05-15 01:00:33.370287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370407] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a4100 was disconnected and freed. reset controller. 
00:23:57.451 [2024-05-15 01:00:33.370421] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:57.451 [2024-05-15 01:00:33.370450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.451 [2024-05-15 01:00:33.370459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.451 [2024-05-15 01:00:33.370481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.451 [2024-05-15 01:00:33.370499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.451 [2024-05-15 01:00:33.370518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:33.370526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.451 [2024-05-15 01:00:33.370581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:23:57.451 [2024-05-15 01:00:33.373387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.451 [2024-05-15 01:00:33.482375] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
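[editor note] The second failover block above follows the same sequence, this time moving the path from 10.0.0.2:4421 to 10.0.0.2:4422 before the controller reset succeeds again. A small companion sketch, under the same assumption of a locally saved "console.log" (hypothetical path), reconstructs the failover chain and checks that every "Start failover" notice is eventually matched by a "Resetting controller successful" notice.

```python
#!/usr/bin/env python3
"""Sketch: rebuild the failover chain (e.g. 4420 -> 4421 -> 4422) from the log above
and flag any failover that is not followed by a successful controller reset."""
import re

FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"Resetting controller successful")

chain = []    # ordered list of trids visited by failover
pending = 0   # failovers started but not yet confirmed by a reset

with open("console.log") as log:
    for line in log:
        for src, dst in FAILOVER_RE.findall(line):
            if not chain:
                chain.append(src)
            chain.append(dst)
            pending += 1
        # Each successful reset confirms one outstanding failover.
        pending -= min(pending, len(RESET_OK_RE.findall(line)))

print(" -> ".join(chain))
if pending:
    print(f"WARNING: {pending} failover(s) without a confirmed controller reset")
```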
00:23:57.451 [2024-05-15 01:00:37.658998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-05-15 01:00:37.659149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659348] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-05-15 01:00:37.659494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.451 [2024-05-15 01:00:37.659504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.659980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.659989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-05-15 01:00:37.660182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-05-15 01:00:37.660192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-05-15 01:00:37.660199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-05-15 01:00:37.660217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-05-15 
01:00:37.660234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-05-15 01:00:37.660252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-05-15 01:00:37.660269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-05-15 01:00:37.660286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-05-15 01:00:37.660304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120464 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.453 [2024-05-15 01:00:37.660402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.453 [2024-05-15 01:00:37.660421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.453 [2024-05-15 01:00:37.660437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.453 [2024-05-15 01:00:37.660454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3480 is same with 
the state(5) to be set 00:23:57.453 [2024-05-15 01:00:37.660599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120472 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120480 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120488 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120496 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121192 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121200 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 
01:00:37.660783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121208 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121216 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121224 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121232 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121240 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121248 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660956] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121256 len:8 PRP1 0x0 PRP2 0x0 00:23:57.453 [2024-05-15 01:00:37.660976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.453 [2024-05-15 01:00:37.660983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.453 [2024-05-15 01:00:37.660990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.453 [2024-05-15 01:00:37.660996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121264 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121272 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121280 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121288 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121296 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121304 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121312 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121320 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121328 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121336 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121344 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 
01:00:37.661333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121352 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121360 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121368 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121376 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121384 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120504 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661544] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120512 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120520 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120528 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120536 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120544 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120552 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120560 len:8 PRP1 0x0 PRP2 0x0 00:23:57.454 [2024-05-15 01:00:37.661749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.454 [2024-05-15 01:00:37.661758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.454 [2024-05-15 01:00:37.661764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.454 [2024-05-15 01:00:37.661772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120568 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.661780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.661789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.661796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.661804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120576 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.661812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.661822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.661829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.661836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120584 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.661845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.661853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.661860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.661868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120592 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.661876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.661885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.661892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.661899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120600 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.661908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.661916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.661923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 
01:00:37.661930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120608 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.661939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.661947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.661954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.661961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120616 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.661970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.661979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.661986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.661994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120624 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120632 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120640 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120648 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120656 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120664 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120672 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120680 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120688 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121392 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:121400 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121408 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121416 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121424 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121432 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121440 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121448 len:8 PRP1 0x0 PRP2 0x0 
00:23:57.455 [2024-05-15 01:00:37.662519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-05-15 01:00:37.662528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.455 [2024-05-15 01:00:37.662540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.455 [2024-05-15 01:00:37.662547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121456 len:8 PRP1 0x0 PRP2 0x0 00:23:57.455 [2024-05-15 01:00:37.662556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.662565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.662572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.662580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120440 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.662588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.662597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.662603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.662611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120696 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.662620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.662628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.662634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.662642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120704 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.662651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.662659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.662666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.662674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120712 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.662682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.662691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.662698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.662705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120720 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.662713] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.662721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.662728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.662736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120728 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.662744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.662753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.662759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.662766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120736 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120744 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120752 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120760 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120768 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666367] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120776 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120784 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120792 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120800 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120808 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120816 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120824 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120832 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120840 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120848 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120856 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120864 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 
[2024-05-15 01:00:37.666762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.456 [2024-05-15 01:00:37.666771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.456 [2024-05-15 01:00:37.666778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120872 len:8 PRP1 0x0 PRP2 0x0 00:23:57.456 [2024-05-15 01:00:37.666787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-05-15 01:00:37.666796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.666802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.666810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120880 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.666819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.666827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.666834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.666841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120888 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.666850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.666858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.666865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.666873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120896 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.666881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.666889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.666896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.666903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120904 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.666912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.666920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.666927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.666934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120912 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.666943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.666951] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.666958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.666966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120920 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.666974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.666983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.666989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.666997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120928 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120936 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120944 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120952 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120960 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120968 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120976 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120984 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120992 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121000 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121008 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 
01:00:37.667349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121016 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121024 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121032 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.457 [2024-05-15 01:00:37.667450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121040 len:8 PRP1 0x0 PRP2 0x0 00:23:57.457 [2024-05-15 01:00:37.667458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-05-15 01:00:37.667466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.457 [2024-05-15 01:00:37.667472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121048 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121056 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667535] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121064 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121072 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121080 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121088 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121096 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121104 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121112 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121120 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121128 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121136 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121144 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121152 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 
[2024-05-15 01:00:37.667930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121160 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121168 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.667975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.667981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.667987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121176 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.667996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.668004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.668010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.668017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121184 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.668025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.668032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.668038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.668049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120448 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.668057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.668065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.668071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.668078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120456 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.668086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.668094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.458 [2024-05-15 01:00:37.668100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.458 [2024-05-15 01:00:37.668107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120464 len:8 PRP1 0x0 PRP2 0x0 00:23:57.458 [2024-05-15 01:00:37.668114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-05-15 01:00:37.668241] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a4600 was disconnected and freed. reset controller. 00:23:57.458 [2024-05-15 01:00:37.668255] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:57.458 [2024-05-15 01:00:37.668266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.458 [2024-05-15 01:00:37.670970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.458 [2024-05-15 01:00:37.671001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:23:57.458 [2024-05-15 01:00:37.860248] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:57.458 00:23:57.458 Latency(us) 00:23:57.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.458 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:57.458 Verification LBA range: start 0x0 length 0x4000 00:23:57.458 NVMe0n1 : 15.05 11176.02 43.66 1541.22 0.00 10018.51 407.44 44702.45 00:23:57.458 =================================================================================================================== 00:23:57.458 Total : 11176.02 43.66 1541.22 0.00 10018.51 407.44 44702.45 00:23:57.458 Received shutdown signal, test time was about 15.000000 seconds 00:23:57.458 00:23:57.458 Latency(us) 00:23:57.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.459 =================================================================================================================== 00:23:57.459 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3569912 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3569912 /var/tmp/bdevperf.sock 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3569912 ']' 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
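The host/failover.sh lines traced just above boil down to a pass or fail gate on the first bdevperf run: count how many times the bdevperf log recorded a successful controller reset and require one per configured failover target. A minimal sketch of that gate, assuming the grep reads the same try.txt log that the script cats and removes further below (the exact plumbing inside failover.sh may differ):

    # Fail the test unless all three expected failovers completed successfully.
    try_log=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$try_log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi

The trace then relaunches bdevperf in RPC-driven mode (-z -r /var/tmp/bdevperf.sock) and waits for its UNIX socket before reconfiguring the listeners for the second half of the test, as the lines that follow show.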
00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:57.459 01:00:44 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:58.025 01:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:58.025 01:00:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:23:58.025 01:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.285 [2024-05-15 01:00:45.195098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.285 01:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:58.285 [2024-05-15 01:00:45.339117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:58.545 01:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:58.545 NVMe0n1 00:23:58.545 01:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:58.806 00:23:58.806 01:00:45 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.067 00:23:59.067 01:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.067 01:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:59.330 01:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.330 01:00:46 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:02.620 01:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:02.620 01:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:02.620 01:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3570873 00:24:02.620 01:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3570873 00:24:02.620 01:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:03.555 0 00:24:03.555 01:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:03.555 [2024-05-15 
01:00:44.345456] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:24:03.555 [2024-05-15 01:00:44.345592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569912 ] 00:24:03.555 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.555 [2024-05-15 01:00:44.462849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.555 [2024-05-15 01:00:44.558505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.555 [2024-05-15 01:00:46.318084] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:03.555 [2024-05-15 01:00:46.318151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.555 [2024-05-15 01:00:46.318167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.555 [2024-05-15 01:00:46.318179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.555 [2024-05-15 01:00:46.318187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.555 [2024-05-15 01:00:46.318196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.555 [2024-05-15 01:00:46.318204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.555 [2024-05-15 01:00:46.318213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.555 [2024-05-15 01:00:46.318221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.555 [2024-05-15 01:00:46.318230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:03.555 [2024-05-15 01:00:46.318276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:03.555 [2024-05-15 01:00:46.318298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:24:03.555 [2024-05-15 01:00:46.362340] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:03.555 Running I/O for 1 seconds... 
00:24:03.555 00:24:03.555 Latency(us) 00:24:03.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.555 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:03.555 Verification LBA range: start 0x0 length 0x4000 00:24:03.555 NVMe0n1 : 1.00 11627.90 45.42 0.00 0.00 10964.72 651.05 10071.85 00:24:03.555 =================================================================================================================== 00:24:03.555 Total : 11627.90 45.42 0.00 0.00 10964.72 651.05 10071.85 00:24:03.555 01:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.555 01:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:03.815 01:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.073 01:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.073 01:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:04.073 01:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.332 01:00:51 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3569912 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3569912 ']' 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3569912 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3569912 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3569912' 00:24:07.661 killing process with pid 3569912 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3569912 00:24:07.661 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3569912 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover 
-- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:07.920 rmmod nvme_tcp 00:24:07.920 rmmod nvme_fabrics 00:24:07.920 rmmod nvme_keyring 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3566326 ']' 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3566326 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3566326 ']' 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3566326 00:24:07.920 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:08.180 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:08.180 01:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3566326 00:24:08.180 01:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:08.180 01:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:08.180 01:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3566326' 00:24:08.180 killing process with pid 3566326 00:24:08.180 01:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3566326 00:24:08.180 [2024-05-15 01:00:55.026986] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:08.180 01:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3566326 00:24:08.747 01:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.747 01:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.747 01:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.747 01:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.747 01:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.747 01:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.747 01:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.747 01:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.648 01:00:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:10.648 00:24:10.648 real 0m37.274s 00:24:10.648 user 1m59.665s 00:24:10.648 sys 0m6.454s 
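Everything from killing the bdevperf process down to the real/user/sys summary above is the standard teardown of the failover test. A condensed sketch of those steps, with pids and paths taken from the log; killprocess and nvmftestfini are SPDK test-harness helpers, and the nvmfpid variable name is an assumption made only for this sketch:

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

    killprocess "$bdevperf_pid"            # 3569912 in this run
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt

    # nvmftestfini: unload the kernel initiator modules, stop the nvmf target,
    # and flush the address configured on the initiator-side interface.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"                 # 3566326 here, the nvmf_tgt reactor process
    ip -4 addr flush cvl_0_1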
00:24:10.648 01:00:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:10.648 01:00:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:10.648 ************************************ 00:24:10.648 END TEST nvmf_failover 00:24:10.648 ************************************ 00:24:10.648 01:00:57 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:10.648 01:00:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:10.648 01:00:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:10.648 01:00:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:10.648 ************************************ 00:24:10.648 START TEST nvmf_host_discovery 00:24:10.648 ************************************ 00:24:10.648 01:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:10.906 * Looking for test storage... 00:24:10.906 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 
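The discovery test that starts below runs against the environment that sourcing test/nvmf/common.sh just established. Restated for readability as a plain sketch, with every value copied from the trace above (the host NQN is regenerated on each run, so the one shown is specific to this run):

    # Defaults established by test/nvmf/common.sh for the host tests
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 in this run
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NET_TYPE=phy-fallback               # selects the physical-NIC branch of nvmftestinit below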
00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:10.906 01:00:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.183 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.184 01:01:03 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:16.184 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:16.184 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:16.184 01:01:03 
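The device scan above builds whitelists of supported NICs by PCI vendor:device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox mlx5 IDs) and then reports the two E810 ports it finds, both bound to the ice driver. A rough standalone equivalent of that lookup, assuming lspci is available and reusing the 8086:159b ID reported in the trace:

  # list the E810 functions (vendor 0x8086, device 0x159b) with full PCI addresses
  lspci -D -d 8086:159b
  # show which kernel driver each function is bound to (the trace expects "ice")
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(basename "$(readlink -f /sys/bus/pci/devices/$pci/driver)")"
  done
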
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:16.184 Found net devices under 0000:27:00.0: cvl_0_0 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:16.184 Found net devices under 0000:27:00.1: cvl_0_1 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.184 01:01:03 
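Here the script resolves each selected PCI function to its kernel netdev name by globbing /sys/bus/pci/devices/$pci/net/, which yields cvl_0_0 and cvl_0_1, and then splits the roles: cvl_0_0 becomes the target interface (to get 10.0.0.2 inside the cvl_0_0_ns_spdk namespace) and cvl_0_1 the initiator interface (10.0.0.1 in the default namespace). A minimal sketch of the same sysfs lookup, reusing the PCI addresses from the trace:

  # one netdev entry is expected per function, e.g. cvl_0_0 and cvl_0_1
  for pci in 0000:27:00.0 0000:27:00.1; do
    ls /sys/bus/pci/devices/$pci/net/
  done
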
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.184 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:24:16.445 00:24:16.445 --- 10.0.0.2 ping statistics --- 00:24:16.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.445 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:24:16.445 00:24:16.445 --- 10.0.0.1 ping statistics --- 00:24:16.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.445 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3575971 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
3575971 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3575971 ']' 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.445 01:01:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:16.445 [2024-05-15 01:01:03.420970] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:24:16.445 [2024-05-15 01:01:03.421091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.705 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.705 [2024-05-15 01:01:03.553703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.705 [2024-05-15 01:01:03.646951] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.705 [2024-05-15 01:01:03.646995] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.705 [2024-05-15 01:01:03.647004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.705 [2024-05-15 01:01:03.647014] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.705 [2024-05-15 01:01:03.647021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
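At this point the test topology is in place: cvl_0_0 lives inside the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the default namespace with 10.0.0.1/24, port 4420 is opened in iptables, and a ping in each direction succeeds. The target application is then started inside the namespace and the harness waits for its default RPC socket before issuing any rpc_cmd. A condensed sketch of that bring-up, reusing the interface, address, and binary names from the trace (the long Jenkins workspace path is specific to this CI job, so a path relative to the SPDK repo root is assumed here):

  # wire the two E810 ports back-to-back through a network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity check both directions, as the trace does
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the NVMe-oF target inside the namespace and wait for /var/tmp/spdk.sock
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
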
00:24:16.705 [2024-05-15 01:01:03.647056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.276 [2024-05-15 01:01:04.179230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.276 [2024-05-15 01:01:04.191184] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:17.276 [2024-05-15 01:01:04.191457] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.276 null0 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.276 null1 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3576185 00:24:17.276 
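The target is now configured over its default RPC socket: a TCP transport with the options seen above, a discovery listener on the well-known discovery NQN at 10.0.0.2:8009, and two 1000 MB null bdevs that will later back the subsystem's namespaces. In these test scripts rpc_cmd appears to be a thin wrapper around scripts/rpc.py, so the equivalent standalone calls would be roughly:

  # TCP transport (options copied from the trace) plus the discovery listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  # two null bdevs, 1000 MB each with 512-byte blocks, to be exported later
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine
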
01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3576185 /tmp/host.sock 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3576185 ']' 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:17.276 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:17.276 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.276 [2024-05-15 01:01:04.312841] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:24:17.276 [2024-05-15 01:01:04.312976] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576185 ] 00:24:17.536 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.536 [2024-05-15 01:01:04.445817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.536 [2024-05-15 01:01:04.542965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.104 01:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.104 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.363 01:01:05 
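A second SPDK application (started with -m 0x1 -r /tmp/host.sock) plays the host role. bdev_nvme_start_discovery points its bdev_nvme layer at the discovery service on 10.0.0.2:8009 with the test host NQN, and the get_subsystem_names helper used throughout the rest of the log is just an RPC query against that socket filtered through jq. A hedged sketch of the same calls, again assuming rpc_cmd maps onto scripts/rpc.py:

  # attach the host app to the discovery service; controllers it finds are named nvme<N>
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # what the discovery service has attached so far (empty until a subsystem admits this host)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
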
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.363 [2024-05-15 01:01:05.283676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.363 01:01:05 
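On the target side the test now builds the subsystem it expects the host to discover: nqn.2016-06.io.spdk:cnode0 gets null0 as its first namespace and a data listener on 10.0.0.2:4420. As standalone RPCs this amounts to roughly the following; note that, as the intervening checks confirm, the host still sees nothing until the nvmf_subsystem_add_host call that the trace issues a few steps later:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # cnode0 stays invisible to the host until its NQN is added to the allowed-host list:
  #   nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
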
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.363 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.364 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.622 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.622 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:24:18.622 01:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:24:19.188 [2024-05-15 01:01:06.054877] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:19.188 [2024-05-15 01:01:06.054913] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:19.188 [2024-05-15 01:01:06.054942] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:19.188 [2024-05-15 01:01:06.140973] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:19.188 [2024-05-15 01:01:06.245134] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:24:19.188 [2024-05-15 01:01:06.245163] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:19.446 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:19.703 01:01:06 
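Once nqn.2021-12.io.spdk:test is admitted, the discovery service attaches the controller (nvme0) and the first namespace shows up as bdev nvme0n1. The waitforcondition and is_notification_count_eq checks visible above and below are a simple bounded poll: evaluate a condition up to max=10 times, one second apart. A sketch of that pattern in the same spirit, here polling the host app's notification count (helper names, socket, and expected value taken from the trace):

  expected_count=1
  for (( max = 10; max > 0; max-- )); do
    notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length')
    (( notification_count == expected_count )) && break
    sleep 1
  done
  (( notification_count == expected_count )) || echo "condition not met within 10 tries" >&2
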
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.703 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.704 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.964 [2024-05-15 01:01:06.948388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.964 [2024-05-15 01:01:06.948862] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:19.964 [2024-05-15 01:01:06.948907] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:19.964 01:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:19.964 01:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.964 01:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.964 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.964 01:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.964 01:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.964 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.964 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.224 [2024-05-15 01:01:07.035243] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:20.224 01:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:24:20.485 [2024-05-15 01:01:07.340151] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:20.485 [2024-05-15 01:01:07.340181] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:20.485 [2024-05-15 01:01:07.340190] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:21.052 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.312 [2024-05-15 01:01:08.157214] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:21.312 [2024-05-15 01:01:08.157246] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.312 [2024-05-15 01:01:08.160807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.312 [2024-05-15 01:01:08.160832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.312 [2024-05-15 01:01:08.160844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.312 [2024-05-15 01:01:08.160852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.312 [2024-05-15 01:01:08.160861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.312 [2024-05-15 01:01:08.160869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.312 [2024-05-15 01:01:08.160877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.312 [2024-05-15 01:01:08.160885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.312 [2024-05-15 01:01:08.160893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 
max=10 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.312 [2024-05-15 01:01:08.170787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.312 [2024-05-15 01:01:08.180801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.312 [2024-05-15 01:01:08.181156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.181262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.181274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:24:21.312 [2024-05-15 01:01:08.181285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:24:21.312 [2024-05-15 01:01:08.181300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:24:21.312 [2024-05-15 01:01:08.181327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.312 [2024-05-15 01:01:08.181340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.312 [2024-05-15 01:01:08.181351] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.312 [2024-05-15 01:01:08.181370] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
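The errno 111 (connection refused) and "Resetting controller failed" messages above are the expected fallout of the listener removal a moment earlier: the target dropped 10.0.0.2:4420, so the host's path on that port is being torn down while its periodic reconnect attempts fail, and only the 4421 path should survive. The triggering call, as a standalone RPC against the target's default socket:

  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
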
00:24:21.312 [2024-05-15 01:01:08.190845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.312 [2024-05-15 01:01:08.191194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.191391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.191402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:24:21.312 [2024-05-15 01:01:08.191410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:24:21.312 [2024-05-15 01:01:08.191423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:24:21.312 [2024-05-15 01:01:08.191439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.312 [2024-05-15 01:01:08.191447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.312 [2024-05-15 01:01:08.191455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.312 [2024-05-15 01:01:08.191466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.312 [2024-05-15 01:01:08.200888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.312 [2024-05-15 01:01:08.201149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:21.312 [2024-05-15 01:01:08.201490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.201503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:24:21.312 [2024-05-15 01:01:08.201512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:24:21.312 [2024-05-15 01:01:08.201526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:24:21.312 [2024-05-15 01:01:08.201590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.312 [2024-05-15 01:01:08.201600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.312 [2024-05-15 01:01:08.201609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.312 [2024-05-15 01:01:08.201623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.312 [2024-05-15 01:01:08.210934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.312 [2024-05-15 01:01:08.211197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.211423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.211434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:24:21.312 [2024-05-15 01:01:08.211445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:24:21.312 [2024-05-15 01:01:08.211461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:24:21.312 [2024-05-15 01:01:08.211481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.312 [2024-05-15 01:01:08.211490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.312 [2024-05-15 01:01:08.211500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.312 [2024-05-15 01:01:08.211515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.312 [2024-05-15 01:01:08.220984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.312 [2024-05-15 01:01:08.221129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.221239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.221250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:24:21.312 [2024-05-15 01:01:08.221260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:24:21.312 [2024-05-15 01:01:08.221273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:24:21.312 [2024-05-15 01:01:08.221285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.312 [2024-05-15 01:01:08.221293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.312 [2024-05-15 01:01:08.221302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.312 [2024-05-15 01:01:08.221314] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.312 [2024-05-15 01:01:08.231026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.312 [2024-05-15 01:01:08.231266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.231581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.231591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:24:21.312 [2024-05-15 01:01:08.231599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:24:21.312 [2024-05-15 01:01:08.231611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:24:21.312 [2024-05-15 01:01:08.231626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.312 [2024-05-15 01:01:08.231633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.312 [2024-05-15 01:01:08.231641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.312 [2024-05-15 01:01:08.231651] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
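The conditions being polled here resolve through small helpers in host/discovery.sh that query the host-side SPDK application on /tmp/host.sock, visible in the trace as the @55/@59/@63/@74 rpc_cmd/jq pipelines. The following is a hedged reconstruction pieced together from those pipelines, not the exact script source:

# get_subsystem_names / get_bdev_list / get_subsystem_paths / get_notification_count,
# reassembled from the rpc_cmd | jq | sort | xargs pipelines shown in the xtrace.
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() {
    # notify_id advances by the number of notifications consumed (2 -> 4 later in this trace)
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}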
00:24:21.312 [2024-05-15 01:01:08.241067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.312 [2024-05-15 01:01:08.241287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.241527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.312 [2024-05-15 01:01:08.241537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3700 with addr=10.0.0.2, port=4420 00:24:21.312 [2024-05-15 01:01:08.241546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3700 is same with the state(5) to be set 00:24:21.312 [2024-05-15 01:01:08.241559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:24:21.312 [2024-05-15 01:01:08.241576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:21.312 [2024-05-15 01:01:08.241584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:21.312 [2024-05-15 01:01:08.241592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:21.312 [2024-05-15 01:01:08.241604] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:21.312 [2024-05-15 01:01:08.243663] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:21.312 [2024-05-15 01:01:08.243691] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.312 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.313 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:21.572 01:01:08 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.572 01:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.510 [2024-05-15 01:01:09.505817] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:22.510 [2024-05-15 01:01:09.505844] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:22.510 [2024-05-15 01:01:09.505868] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:22.768 [2024-05-15 01:01:09.631959] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:22.768 [2024-05-15 01:01:09.698821] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:22.768 [2024-05-15 01:01:09.698860] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.768 
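host/discovery.sh@141 restarts discovery against the 10.0.0.2:8009 discovery service with -w (wait for attach), and @143 immediately repeats the identical call under NOT, expecting the duplicate controller name to be rejected; the JSON-RPC dump that follows shows the resulting -17 "File exists" error. Outside the harness the same two steps could be issued directly with SPDK's scripts/rpc.py, roughly as sketched below (socket path, address, and host NQN are taken from the trace; this is a sketch, not the test's own code):

# Rough standalone equivalent of the @141/@143 steps against the host application socket.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Re-registering the same discovery controller name is expected to fail;
# the trace below reports this as JSON-RPC error -17 ("File exists").
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
    || echo 'duplicate discovery registration rejected as expected'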
01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.768 request: 00:24:22.768 { 00:24:22.768 "name": "nvme", 00:24:22.768 "trtype": "tcp", 00:24:22.768 "traddr": "10.0.0.2", 00:24:22.768 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:22.768 "adrfam": "ipv4", 00:24:22.768 "trsvcid": "8009", 00:24:22.768 "wait_for_attach": true, 00:24:22.768 "method": "bdev_nvme_start_discovery", 00:24:22.768 "req_id": 1 00:24:22.768 } 00:24:22.768 Got JSON-RPC error response 00:24:22.768 response: 00:24:22.768 { 00:24:22.768 "code": -17, 00:24:22.768 "message": "File exists" 00:24:22.768 } 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.768 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.769 01:01:09 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.769 request: 00:24:22.769 { 00:24:22.769 "name": "nvme_second", 00:24:22.769 "trtype": "tcp", 00:24:22.769 "traddr": "10.0.0.2", 00:24:22.769 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:22.769 "adrfam": "ipv4", 00:24:22.769 "trsvcid": "8009", 00:24:22.769 "wait_for_attach": true, 00:24:22.769 "method": "bdev_nvme_start_discovery", 00:24:22.769 "req_id": 1 00:24:22.769 } 00:24:22.769 Got JSON-RPC error response 00:24:22.769 response: 00:24:22.769 { 00:24:22.769 "code": -17, 00:24:22.769 "message": "File exists" 00:24:22.769 } 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:22.769 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.027 01:01:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.959 [2024-05-15 01:01:10.895454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.959 [2024-05-15 01:01:10.895818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.959 [2024-05-15 01:01:10.895832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a4880 with addr=10.0.0.2, port=8010 00:24:23.959 [2024-05-15 01:01:10.895863] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:23.959 [2024-05-15 01:01:10.895875] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:23.959 [2024-05-15 01:01:10.895886] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:24.895 [2024-05-15 01:01:11.895375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 01:01:11.895616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 01:01:11.895627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a4b00 with addr=10.0.0.2, port=8010 00:24:24.895 [2024-05-15 01:01:11.895653] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:24.895 [2024-05-15 01:01:11.895662] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:24.895 [2024-05-15 01:01:11.895671] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:26.271 [2024-05-15 01:01:12.895095] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:26.271 request: 00:24:26.271 { 00:24:26.271 "name": "nvme_second", 00:24:26.271 "trtype": "tcp", 00:24:26.271 "traddr": "10.0.0.2", 00:24:26.271 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:26.271 "adrfam": "ipv4", 00:24:26.271 
"trsvcid": "8010", 00:24:26.271 "attach_timeout_ms": 3000, 00:24:26.271 "method": "bdev_nvme_start_discovery", 00:24:26.271 "req_id": 1 00:24:26.271 } 00:24:26.271 Got JSON-RPC error response 00:24:26.271 response: 00:24:26.271 { 00:24:26.271 "code": -110, 00:24:26.271 "message": "Connection timed out" 00:24:26.271 } 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3576185 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:26.271 01:01:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:26.271 rmmod nvme_tcp 00:24:26.271 rmmod nvme_fabrics 00:24:26.271 rmmod nvme_keyring 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3575971 ']' 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3575971 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3575971 ']' 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3575971 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux 
= Linux ']' 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3575971 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3575971' 00:24:26.271 killing process with pid 3575971 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3575971 00:24:26.271 [2024-05-15 01:01:13.065363] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:26.271 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3575971 00:24:26.534 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.534 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.534 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.534 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.534 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.534 01:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.534 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.534 01:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:29.075 00:24:29.075 real 0m17.883s 00:24:29.075 user 0m21.796s 00:24:29.075 sys 0m5.496s 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.075 ************************************ 00:24:29.075 END TEST nvmf_host_discovery 00:24:29.075 ************************************ 00:24:29.075 01:01:15 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:29.075 01:01:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:29.075 01:01:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:29.075 01:01:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:29.075 ************************************ 00:24:29.075 START TEST nvmf_host_multipath_status 00:24:29.075 ************************************ 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:29.075 * Looking for test storage... 
00:24:29.075 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/bpftrace.sh 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.075 01:01:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:29.075 01:01:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:34.424 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:34.425 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:34.425 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:34.425 Found net devices under 0000:27:00.0: cvl_0_0 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:34.425 Found net devices under 0000:27:00.1: cvl_0_1 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.425 01:01:20 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.425 01:01:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:24:34.425 00:24:34.425 --- 10.0.0.2 ping statistics --- 00:24:34.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.425 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:24:34.425 00:24:34.425 --- 10.0.0.1 ping statistics --- 00:24:34.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.425 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.425 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3582005 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3582005 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3582005 ']' 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.426 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:34.426 [2024-05-15 01:01:21.241481] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
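
Editor's note: the setup traced above builds the usual SPDK TCP test topology before nvmf_tgt is started: one E810 port (cvl_0_0) is moved into a fresh network namespace and addressed 10.0.0.2/24, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, reachability is pinged in both directions, and the target binary is then launched inside the namespace. A condensed restatement of those commands (root required; full binary path shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator side -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target side -> initiator side
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3
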
00:24:34.426 [2024-05-15 01:01:21.241582] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.426 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.426 [2024-05-15 01:01:21.359309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:34.684 [2024-05-15 01:01:21.452703] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.684 [2024-05-15 01:01:21.452743] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.684 [2024-05-15 01:01:21.452752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.684 [2024-05-15 01:01:21.452761] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.684 [2024-05-15 01:01:21.452769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.684 [2024-05-15 01:01:21.452848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.684 [2024-05-15 01:01:21.452875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.943 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:34.943 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:24:34.943 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.943 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.943 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.943 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.943 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3582005 00:24:34.943 01:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:35.203 [2024-05-15 01:01:22.113040] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.203 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:35.463 Malloc0 00:24:35.463 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:35.463 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.722 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.722 [2024-05-15 01:01:22.742839] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 
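
Editor's note: once the target is up, multipath_status.sh configures it entirely over JSON-RPC, as the records above and just below show. Condensed from those rpc.py calls (the full /var/jenkins/.../spdk/scripts/rpc.py path is shortened to rpc.py; as I read the flags, -a allows any host, -r turns on ANA reporting, -m 2 caps the namespace count):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second listener (port 4421) is added a few records further down; the two listeners on the same subsystem are what give the host two paths to flip between.
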
00:24:35.722 [2024-05-15 01:01:22.743148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.722 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:35.980 [2024-05-15 01:01:22.883056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3582333 00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3582333 /var/tmp/bdevperf.sock 00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3582333 ']' 00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
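
Editor's note: on the host side the test runs bdevperf as an RPC server (-z) and, in the records that follow, attaches the same subsystem twice, once through each listener; the second attach passes -x multipath so the extra path is folded into the existing Nvme0 controller rather than creating a second bdev, which is why both attaches report the same Nvme0n1. A rough sketch of that sequence (binary and script paths shortened, arguments as traced; the real script waits for the RPC socket before issuing calls):

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &   # verify workload runs while ANA states flip
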
00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:35.980 01:01:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:36.915 01:01:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:36.915 01:01:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:24:36.915 01:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:36.915 01:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:37.175 Nvme0n1 00:24:37.175 01:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:37.756 Nvme0n1 00:24:37.756 01:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:37.756 01:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:39.659 01:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:39.659 01:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:39.917 01:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:39.917 01:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:41.295 01:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:41.295 01:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.295 01:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.296 01:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.296 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.296 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:41.296 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.296 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.296 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.296 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:41.296 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.296 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:41.554 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.554 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:41.554 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.554 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:41.811 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.811 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:41.811 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.811 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:41.811 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.811 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:41.811 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.811 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.068 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.068 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:42.068 01:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:42.068 01:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:42.325 01:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:43.263 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false 
true true true true true 00:24:43.263 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:43.263 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.263 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.522 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.522 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:43.522 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.522 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:43.522 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.522 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:43.522 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.522 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.779 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.779 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.779 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.779 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.779 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.779 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:43.779 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.779 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:44.037 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.037 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:44.037 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:44.037 01:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.037 01:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.037 01:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:44.037 01:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:44.294 01:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:44.294 01:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.672 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:45.930 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.930 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:45.930 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.930 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:45.930 01:01:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.930 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:45.930 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.930 01:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.189 01:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.189 01:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:46.189 01:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.189 01:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:46.189 01:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.189 01:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:46.189 01:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:46.448 01:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:46.707 01:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:47.641 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:47.641 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:47.641 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.641 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:47.641 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.641 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:47.641 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.641 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.898 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:47.898 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.898 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.898 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.157 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.157 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:48.157 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.157 01:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:48.157 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.157 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:48.157 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.157 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:48.416 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.416 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:48.416 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.416 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:48.417 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.417 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:48.417 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:48.677 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:48.677 01:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:50.054 01:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.054 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.054 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:50.054 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.054 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:50.314 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.314 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:50.314 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.314 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:50.573 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:50.573 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:50.573 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.573 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:50.573 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:50.573 01:01:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:50.573 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:50.833 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:50.833 01:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:52.222 01:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:52.222 01:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:52.222 01:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.222 01:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:52.223 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.223 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:52.223 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.223 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:52.223 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.223 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:52.223 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.223 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.485 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.485 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.485 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.485 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.485 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.485 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:52.485 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.485 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.745 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.745 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:52.745 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.745 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.745 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.745 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:53.004 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:53.004 01:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:53.263 01:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:53.263 01:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:54.201 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:54.201 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:54.201 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.201 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.461 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.461 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:54.461 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.461 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.721 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.721 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 
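
Editor's note: every check_status round in this trace reduces to two helpers. set_ANA_state pushes a new ANA state to each target listener, and port_status asks bdevperf for its io_paths and pulls one flag (current, connected or accessible) out of the JSON with jq, keyed on the listener port. A minimal standalone rendering with names and arguments taken from the calls shown here (the real helpers live in test/nvmf/host/multipath_status.sh, so treat this as a sketch rather than the exact source; rpc.py again stands for the full script path):

    set_ANA_state() {       # e.g. set_ANA_state non_optimized inaccessible
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {         # e.g. port_status 4420 current true
        local port=$1 attr=$2 expected=$3 got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $got == "$expected" ]]
    }

With the policy switched to active_active just above (bdev_nvme_set_multipath_policy -p active_active) and both listeners optimized, both paths report current as true at the same time, which is what the check_status true true true true true true round around this point verifies; under the default policy only one path was current at a time.
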
00:24:54.721 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.721 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.721 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.721 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.721 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.721 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.980 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.980 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:54.980 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.980 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.980 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.980 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:54.980 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.980 01:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.249 01:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.249 01:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:55.249 01:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:55.249 01:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:55.574 01:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:56.513 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:56.513 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:56.513 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.513 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.513 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:56.513 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:56.513 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.513 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:56.773 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.773 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:56.773 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.773 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.031 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.031 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.031 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.031 01:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.031 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.031 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:57.031 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.031 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.289 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.289 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.289 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.289 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.289 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.289 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # 
set_ANA_state non_optimized non_optimized 00:24:57.289 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:57.547 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:57.547 01:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.925 01:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.184 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.184 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.184 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.184 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.184 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.184 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:59.184 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.184 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:59.443 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.443 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:59.443 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.443 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:59.443 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.443 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:59.443 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:59.702 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:59.702 01:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:01.081 01:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:01.081 01:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:01.081 01:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.081 01:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:01.081 01:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.081 01:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:01.081 01:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.081 01:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:01.081 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.081 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:01.081 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.081 01:01:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:01.341 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.341 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:01.341 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:01.341 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.341 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.341 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:01.341 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.341 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3582333 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3582333 ']' 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3582333 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3582333 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3582333' 00:25:01.600 killing process with pid 3582333 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3582333 00:25:01.600 01:01:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3582333 00:25:01.858 Connection closed with partial response: 00:25:01.858 
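
Editor's note: at this point the test kills bdevperf mid-workload (hence the "Connection closed with partial response" from the RPC client), waits for it, and dumps its captured log (try.txt) below. The per-I/O completions in that dump carry ASYMMETRIC ACCESS INACCESSIBLE (03/02), the NVMe path-related status (SCT 0x3, SC 0x02) the host sees while a listener is in the inaccessible ANA state; the multipath bdev is expected to retry such I/O on the remaining path rather than fail it, which is why the verify workload keeps running through the state flips. A quick way to gauge how many completions hit that status in the dumped file (path as printed in the trace):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt
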
00:25:01.858 00:25:02.156 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3582333 00:25:02.156 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:02.156 [2024-05-15 01:01:22.947940] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:25:02.156 [2024-05-15 01:01:22.948026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582333 ] 00:25:02.156 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.156 [2024-05-15 01:01:23.036160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.156 [2024-05-15 01:01:23.131703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.156 Running I/O for 90 seconds... 00:25:02.156 [2024-05-15 01:01:35.518995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.156 [2024-05-15 01:01:35.519068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 01:01:35.519110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 01:01:35.519135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 01:01:35.519158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 01:01:35.519181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 01:01:35.519213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 01:01:35.519235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 
01:01:35.519257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 01:01:35.519280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 01:01:35.519303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:02.156 [2024-05-15 01:01:35.519326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.156 [2024-05-15 01:01:35.519340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 
cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519701] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.519716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.519723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 
01:01:35.520224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110200 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.157 [2024-05-15 01:01:35.520490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.157 [2024-05-15 01:01:35.520505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.520905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.520913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.158 [2024-05-15 01:01:35.521727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.158 [2024-05-15 01:01:35.521741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110592 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.521985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.521993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522654] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.159 [2024-05-15 01:01:35.522685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.159 [2024-05-15 01:01:35.522707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.159 [2024-05-15 01:01:35.522730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.159 [2024-05-15 01:01:35.522752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.159 [2024-05-15 01:01:35.522776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.159 [2024-05-15 01:01:35.522799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.159 [2024-05-15 01:01:35.522821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 
sqhd:003d p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.159 [2024-05-15 01:01:35.522932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.159 [2024-05-15 01:01:35.522946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.159 [2024-05-15 01:01:35.522954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.522968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.522976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.522989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.522997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523096] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 01:01:35.523295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:02.160 [2024-05-15 01:01:35.523308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.160 [2024-05-15 
01:01:35.523316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:02.160 [2024-05-15 01:01:35.523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:109968-110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, one notice per outstanding cid
00:25:02.160 [2024-05-15 01:01:35.523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, one completion per WRITE above
00:25:02.163 [2024-05-15 01:01:35.526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:109776-109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, one notice per outstanding cid
00:25:02.163 [2024-05-15 01:01:35.526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, one completion per READ above
00:25:02.163 [2024-05-15 01:01:35.526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:110760-110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, one notice per outstanding cid, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
00:25:02.164 [2024-05-15 01:01:35.527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:109840-110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, one notice per outstanding cid
00:25:02.164 [2024-05-15 01:01:35.527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, one completion per WRITE above
00:25:02.165 [2024-05-15 01:01:35.529251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.165 [2024-05-15 01:01:35.529517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.165 [2024-05-15 01:01:35.529531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.529538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.529553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.529561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.166 [2024-05-15 01:01:35.530428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.166 [2024-05-15 01:01:35.530450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.166 [2024-05-15 01:01:35.530471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.166 [2024-05-15 01:01:35.530493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.166 [2024-05-15 01:01:35.530515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.166 [2024-05-15 01:01:35.530538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.166 [2024-05-15 01:01:35.530560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.166 [2024-05-15 01:01:35.530689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530817] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.166 [2024-05-15 01:01:35.530890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:02.166 [2024-05-15 01:01:35.530904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.530911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.530924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.530932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.530945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.530952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.530966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.530974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.530988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.530996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 
m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.531981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.531989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.167 [2024-05-15 01:01:35.532207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.167 [2024-05-15 01:01:35.532220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:73 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532543] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 
sqhd:000b p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.532934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.532941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.533415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.533425] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.533440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.533448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.533462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.533470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.533484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.533492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.533506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.533514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.533527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.168 [2024-05-15 01:01:35.533535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.168 [2024-05-15 01:01:35.533549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 
01:01:35.533644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110664 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.533982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.533990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.534011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.534033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.534057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534071] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.534079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.534102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.169 [2024-05-15 01:01:35.534124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.169 [2024-05-15 01:01:35.534145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.169 [2024-05-15 01:01:35.534167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.169 [2024-05-15 01:01:35.534189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.169 [2024-05-15 01:01:35.534210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.169 [2024-05-15 01:01:35.534253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.169 [2024-05-15 01:01:35.534275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 01:01:35.534288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.169 [2024-05-15 01:01:35.534295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.169 [2024-05-15 
01:01:35.534308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.170 [2024-05-15 01:01:35.534404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 
cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.534550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.534558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538742] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 
01:01:35.538960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.538982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.538995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.539002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.539016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.539024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.539037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.539049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.539063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.539071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.539085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.539093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.539106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.539114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.539128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.539135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.539149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.539156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.539170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110112 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.170 [2024-05-15 01:01:35.539177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:02.170 [2024-05-15 01:01:35.539193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:25:02.171 [2024-05-15 01:01:35.539607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.539983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.539997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.540005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.540018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.540026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.540039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.540054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.171 [2024-05-15 01:01:35.540068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.171 [2024-05-15 01:01:35.540077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.540989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.540997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:60 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:25:02.172 [2024-05-15 01:01:35.541538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.172 [2024-05-15 01:01:35.541568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.172 [2024-05-15 01:01:35.541584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.172 [2024-05-15 01:01:35.541595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.173 [2024-05-15 01:01:35.541620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.173 [2024-05-15 01:01:35.541641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.173 [2024-05-15 01:01:35.541662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.173 [2024-05-15 01:01:35.541683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.173 [2024-05-15 01:01:35.541707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.173 [2024-05-15 01:01:35.541728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.173 [2024-05-15 01:01:35.541856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.173 [2024-05-15 01:01:35.541958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.173 [2024-05-15 01:01:35.541966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[00:25:02.173 - 00:25:02.178, 2024-05-15 01:01:35.541981 - 01:01:35.548561: repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs; WRITE and READ commands, sqid:1, nsid:1, lba:109776-110792, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000 (WRITE) or SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (READ), cid:0-126, sqhd cycling 0x0000-0x007f; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:25:02.178 [2024-05-15 01:01:35.548574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.178 [2024-05-15 01:01:35.548582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0
dnr:0 00:25:02.178 [2024-05-15 01:01:35.548596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.178 [2024-05-15 01:01:35.548604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.178 [2024-05-15 01:01:35.548617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.178 [2024-05-15 01:01:35.548626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.178 [2024-05-15 01:01:35.548639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.178 [2024-05-15 01:01:35.548647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.178 [2024-05-15 01:01:35.548661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.178 [2024-05-15 01:01:35.548669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.178 [2024-05-15 01:01:35.548683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.178 [2024-05-15 01:01:35.548691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.178 [2024-05-15 01:01:35.548704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.548985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.548992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.179 [2024-05-15 01:01:35.549298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.179 [2024-05-15 01:01:35.549320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.179 [2024-05-15 01:01:35.549341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.179 [2024-05-15 01:01:35.549363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.179 [2024-05-15 01:01:35.549385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.179 [2024-05-15 01:01:35.549406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.179 [2024-05-15 01:01:35.549429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:89 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.179 [2024-05-15 01:01:35.549536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.179 [2024-05-15 01:01:35.549558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.179 [2024-05-15 01:01:35.549571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.549593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.549614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.549636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.549661] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.549684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.549708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.549730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.549752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.549772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.549780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 
sqhd:004f p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 
01:01:35.550831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.180 [2024-05-15 01:01:35.550960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.180 [2024-05-15 01:01:35.550974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.550982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.550995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110184 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551258] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 
01:01:35.551471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.551697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.551707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.552262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.552274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.552289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.552298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.552312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.552320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.552335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.552343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.552357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.552367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.552382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.181 [2024-05-15 01:01:35.552390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.181 [2024-05-15 01:01:35.552405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.552991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.552999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.553012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.553020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.553034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.553042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.553061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.553069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.553082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.553090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.553105] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.553113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.553126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.553134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.182 [2024-05-15 01:01:35.553148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.182 [2024-05-15 01:01:35.553155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.183 [2024-05-15 01:01:35.553182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.183 [2024-05-15 01:01:35.553204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.183 [2024-05-15 01:01:35.553226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.183 [2024-05-15 01:01:35.553249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.183 [2024-05-15 01:01:35.553270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.183 [2024-05-15 01:01:35.553293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.183 [2024-05-15 01:01:35.553314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 
sqhd:003b p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.183 [2024-05-15 01:01:35.553449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.553658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.553666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554269] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110024 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.183 [2024-05-15 01:01:35.554540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.183 [2024-05-15 01:01:35.554549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554692] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 
01:01:35.554907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.554992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.554999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.184 [2024-05-15 01:01:35.555403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.184 [2024-05-15 01:01:35.555410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.555424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.555432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.555450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.555457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.555471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.555478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.555492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.555499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.555513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.555521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.555534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:02.185 [2024-05-15 01:01:35.555544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:28 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:25:02.185 [2024-05-15 01:01:35.556722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.185 [2024-05-15 01:01:35.556815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.185 [2024-05-15 01:01:35.556829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.556837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.556851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.556859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.556873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.556882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.556896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.556904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.556917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.556925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.556938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.556946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.556960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.556968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.556981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.556989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.186 [2024-05-15 01:01:35.557010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.186 [2024-05-15 01:01:35.557032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.186 [2024-05-15 01:01:35.557056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.186 [2024-05-15 01:01:35.557078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.186 [2024-05-15 01:01:35.557100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.186 [2024-05-15 01:01:35.557129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.186 [2024-05-15 01:01:35.557152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.186 [2024-05-15 01:01:35.557286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109864 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.557477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.557485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558101] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.186 [2024-05-15 01:01:35.558217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:02.186 [2024-05-15 01:01:35.558230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 
01:01:35.558315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.187 [2024-05-15 01:01:35.558928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:02.187 [2024-05-15 01:01:35.558950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.187 [2024-05-15 01:01:35.558964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.558972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.558985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.558994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:120 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559918] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.559986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.559994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.188 [2024-05-15 01:01:35.560379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.188 [2024-05-15 01:01:35.560387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 
01:01:35.560565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110728 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.560853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-05-15 01:01:35.560877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-05-15 01:01:35.560899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-05-15 01:01:35.560921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-05-15 01:01:35.560943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-05-15 01:01:35.560965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.560978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-05-15 01:01:35.560986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561000] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-05-15 01:01:35.561008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-05-15 01:01:35.561139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 
01:01:35.561219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.189 [2024-05-15 01:01:35.561282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.189 [2024-05-15 01:01:35.561290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.561305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.561314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.561327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.561336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.561870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.561880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.561896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.561904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.561918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.561925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.561939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.561947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.561963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.561971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 
cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.561985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.561992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562197] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 
01:01:35.562411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110176 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.190 [2024-05-15 01:01:35.562678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.190 [2024-05-15 01:01:35.562686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562828] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.562977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.562984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 
01:01:35.563049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.191 [2024-05-15 01:01:35.563982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.191 [2024-05-15 01:01:35.563990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:02.192 [2024-05-15 01:01:35.564239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.192 [2024-05-15 01:01:35.564742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.192 [2024-05-15 01:01:35.564766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.192 [2024-05-15 01:01:35.564790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.192 [2024-05-15 01:01:35.564811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.192 [2024-05-15 01:01:35.564834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.192 [2024-05-15 01:01:35.564857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.192 [2024-05-15 01:01:35.564871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.192 [2024-05-15 01:01:35.564878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 
m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.564892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.193 [2024-05-15 01:01:35.564900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.564916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.564923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.564936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.564944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.564957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.564965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.564979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.564987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.193 [2024-05-15 01:01:35.565029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565838] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.565983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.565990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110016 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.193 [2024-05-15 01:01:35.566272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.193 [2024-05-15 01:01:35.566285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566502] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566932] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.566986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.566994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.567007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.567015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.567031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.567039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.567055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.567065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.567084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.567092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.570623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.570632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.571177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 01:01:35.571187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.194 [2024-05-15 01:01:35.571202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.194 [2024-05-15 
01:01:35.571210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:02.194-00:25:02.200 [2024-05-15 01:01:35.571224 through 01:01:46.724996] nvme_qpair.c: the same *NOTICE* pair repeats for every outstanding command on qid:1 - 243:nvme_io_qpair_print_command for WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands, nsid:1, len:8, lba in the 90960-110792 range, each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:25:02.200 [2024-05-15 01:01:46.725004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a
p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.200 [2024-05-15 01:01:46.725639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.200 [2024-05-15 01:01:46.725647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 
01:01:46.725673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91344 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.725985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.725993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.201 [2024-05-15 01:01:46.726425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.201 [2024-05-15 01:01:46.726450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.201 [2024-05-15 01:01:46.726476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.201 [2024-05-15 01:01:46.726541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.201 [2024-05-15 01:01:46.726550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.201 Received shutdown signal, test time was about 23.945483 seconds 00:25:02.201 00:25:02.201 Latency(us) 00:25:02.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.201 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:02.201 Verification LBA range: start 0x0 length 0x4000 00:25:02.201 Nvme0n1 : 23.94 11017.03 43.04 0.00 0.00 11598.77 556.19 3072879.56 00:25:02.201 =================================================================================================================== 00:25:02.201 Total : 11017.03 43.04 0.00 0.00 11598.77 556.19 3072879.56 00:25:02.201 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- 
# /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:02.462 rmmod nvme_tcp 00:25:02.462 rmmod nvme_fabrics 00:25:02.462 rmmod nvme_keyring 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3582005 ']' 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3582005 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3582005 ']' 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3582005 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3582005 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3582005' 00:25:02.462 killing process with pid 3582005 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3582005 00:25:02.462 [2024-05-15 01:01:49.310819] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:02.462 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3582005 00:25:03.030 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:03.030 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:03.030 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:03.030 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.030 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.030 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.030 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.030 01:01:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.931 01:01:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:04.931 00:25:04.931 real 0m36.290s 00:25:04.931 user 1m34.686s 00:25:04.931 sys 0m8.795s 00:25:04.931 01:01:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:04.931 01:01:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:04.931 ************************************ 00:25:04.931 END TEST nvmf_host_multipath_status 00:25:04.931 ************************************ 00:25:04.932 01:01:51 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:04.932 01:01:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:04.932 01:01:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:04.932 01:01:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:04.932 ************************************ 00:25:04.932 START TEST nvmf_discovery_remove_ifc 00:25:04.932 ************************************ 00:25:04.932 01:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:05.191 * Looking for test storage... 
00:25:05.191 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.191 01:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:10.475 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.475 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:10.475 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.476 01:01:57 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:10.476 Found net devices under 0000:27:00.0: cvl_0_0 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:10.476 Found net devices under 0000:27:00.1: cvl_0_1 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:25:10.476 00:25:10.476 --- 10.0.0.2 ping statistics --- 00:25:10.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.476 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:25:10.476 00:25:10.476 --- 10.0.0.1 ping statistics --- 00:25:10.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.476 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3591440 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3591440 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3591440 ']' 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:10.476 01:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.735 [2024-05-15 01:01:57.564955] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:25:10.735 [2024-05-15 01:01:57.565034] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.735 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.735 [2024-05-15 01:01:57.667024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.735 [2024-05-15 01:01:57.760344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.735 [2024-05-15 01:01:57.760384] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.735 [2024-05-15 01:01:57.760392] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.735 [2024-05-15 01:01:57.760401] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.735 [2024-05-15 01:01:57.760409] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.735 [2024-05-15 01:01:57.760435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.298 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:11.298 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:11.298 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:11.298 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.298 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.298 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.298 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:11.298 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.299 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.299 [2024-05-15 01:01:58.310473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.299 [2024-05-15 01:01:58.318412] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:11.299 [2024-05-15 01:01:58.318655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:11.299 null0 00:25:11.299 [2024-05-15 01:01:58.350531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.558 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.558 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3591746 00:25:11.558 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3591746 /tmp/host.sock 00:25:11.558 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3591746 ']' 00:25:11.558 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:11.558 01:01:58 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:11.558 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:11.558 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:11.558 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:11.558 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:11.558 01:01:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.558 [2024-05-15 01:01:58.444884] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:25:11.558 [2024-05-15 01:01:58.444987] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591746 ] 00:25:11.558 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.558 [2024-05-15 01:01:58.557094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.817 [2024-05-15 01:01:58.649419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.385 01:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.401 [2024-05-15 01:02:00.364266] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:13.401 [2024-05-15 01:02:00.364300] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:13.401 [2024-05-15 
01:02:00.364331] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:13.401 [2024-05-15 01:02:00.452381] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:13.659 [2024-05-15 01:02:00.640596] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:13.660 [2024-05-15 01:02:00.640656] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:13.660 [2024-05-15 01:02:00.640690] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:13.660 [2024-05-15 01:02:00.640712] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:13.660 [2024-05-15 01:02:00.640741] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:13.660 [2024-05-15 01:02:00.642555] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150003a3c00 was disconnected and freed. delete nvme_qpair. 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:13.660 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:13.919 01:02:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:14.855 01:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:16.246 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:16.246 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.246 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:16.247 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.247 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:16.247 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:16.247 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.247 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.247 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:16.247 01:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:17.181 01:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.181 01:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.181 01:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.181 01:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.181 01:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.181 01:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.181 01:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.181 01:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.181 01:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:17.181 01:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:18.113 01:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.047 [2024-05-15 01:02:06.068749] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:19.047 [2024-05-15 01:02:06.068808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.047 [2024-05-15 01:02:06.068822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.047 [2024-05-15 01:02:06.068835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.047 [2024-05-15 01:02:06.068848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.047 [2024-05-15 01:02:06.068858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.047 [2024-05-15 01:02:06.068866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.047 [2024-05-15 01:02:06.068876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.047 [2024-05-15 01:02:06.068884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.047 [2024-05-15 01:02:06.068894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.047 [2024-05-15 01:02:06.068903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.047 [2024-05-15 01:02:06.068912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3980 is same with the state(5) to be set 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.047 [2024-05-15 01:02:06.078743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:25:19.047 [2024-05-15 01:02:06.088761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.047 01:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.420 01:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.420 01:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.420 01:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.420 01:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.420 01:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.420 01:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 01:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.420 [2024-05-15 01:02:07.133090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:21.355 [2024-05-15 01:02:08.157087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:21.355 [2024-05-15 01:02:08.157158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3980 with addr=10.0.0.2, port=4420 00:25:21.355 [2024-05-15 01:02:08.157184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3980 is same with the state(5) to be set 00:25:21.355 [2024-05-15 01:02:08.157815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3980 (9): Bad file descriptor 00:25:21.355 [2024-05-15 01:02:08.157853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.355 [2024-05-15 01:02:08.157902] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:21.355 [2024-05-15 01:02:08.157944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.355 [2024-05-15 01:02:08.157965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.355 [2024-05-15 01:02:08.157987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.355 [2024-05-15 01:02:08.158002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.355 [2024-05-15 01:02:08.158020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.355 [2024-05-15 01:02:08.158036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.355 [2024-05-15 01:02:08.158072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.355 [2024-05-15 01:02:08.158087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.355 [2024-05-15 01:02:08.158103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.355 [2024-05-15 01:02:08.158117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.355 [2024-05-15 01:02:08.158133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
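The ABORTED / SQ DELETION prints and the "Resetting controller failed" error in this stretch of the log are the intended outcome of the fault injected a few seconds earlier, when the test deleted the target address and downed the link inside the target's network namespace (the host/discovery_remove_ifc.sh@75 and @76 lines traced above). A minimal sketch of that step, reusing the namespace and interface names from this run:

  # Pull the target address and link out from under the connected host
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

The host side is then expected to give up its resets within the --ctrlr-loss-timeout-sec 2 window that was passed to bdev_nvme_start_discovery, which is what eventually empties the bdev list that the get_bdev_list polling keeps checking.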
00:25:21.355 [2024-05-15 01:02:08.158248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:25:21.355 [2024-05-15 01:02:08.159284] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:21.355 [2024-05-15 01:02:08.159302] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:21.355 01:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.355 01:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:21.355 01:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.290 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.550 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:22.550 01:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:23.484 [2024-05-15 01:02:10.206430] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:23.484 [2024-05-15 01:02:10.206460] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:23.484 [2024-05-15 01:02:10.206482] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.484 [2024-05-15 01:02:10.294544] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:23.484 01:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:23.484 [2024-05-15 01:02:10.523524] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:23.484 [2024-05-15 01:02:10.523576] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:23.484 [2024-05-15 01:02:10.523607] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:23.484 [2024-05-15 01:02:10.523627] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:23.484 [2024-05-15 01:02:10.523640] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:23.484 [2024-05-15 01:02:10.524880] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150003a4380 was disconnected and freed. delete nvme_qpair. 
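The get_bdev_list / sleep 1 rounds that repeat throughout this test are a plain polling wait on the host RPC socket; the trace shows the exact pipeline (bdev_get_bdevs piped through jq, sort and xargs). A condensed sketch of that pattern, assuming rpc_cmd forwards its arguments to SPDK's scripts/rpc.py, not the script verbatim:

  get_bdev_list() {
    # Space-separated, sorted list of bdev names known to the host app
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
    # Poll once per second until the bdev list matches the expected value
    # ("" while waiting for removal, nvme0n1/nvme1n1 while waiting for attach)
    while [[ "$(get_bdev_list)" != "$1" ]]; do
      sleep 1
    done
  }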
00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3591746 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3591746 ']' 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3591746 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:24.422 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3591746 00:25:24.690 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:24.690 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:24.690 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3591746' 00:25:24.690 killing process with pid 3591746 00:25:24.690 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3591746 00:25:24.690 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3591746 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:24.949 rmmod nvme_tcp 00:25:24.949 rmmod nvme_fabrics 00:25:24.949 rmmod nvme_keyring 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3591440 ']' 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3591440 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3591440 ']' 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3591440 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3591440 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3591440' 00:25:24.949 killing process with pid 3591440 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3591440 00:25:24.949 [2024-05-15 01:02:11.972529] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:24.949 01:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3591440 00:25:25.514 01:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:25.514 01:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:25.514 01:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:25.514 01:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:25.514 01:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:25.514 01:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.514 01:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.514 01:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.436 01:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:27.436 00:25:27.436 real 0m22.504s 00:25:27.436 user 0m28.102s 00:25:27.436 sys 0m5.190s 00:25:27.436 01:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:27.436 01:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.436 ************************************ 00:25:27.436 END TEST nvmf_discovery_remove_ifc 00:25:27.436 ************************************ 00:25:27.436 01:02:14 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:27.436 01:02:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:27.436 01:02:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:27.436 01:02:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
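Before the next test begins, the trace above kills the host app (pid 3591746) and the nvmf target (pid 3591440), unloads nvme-tcp, nvme-fabrics and nvme-keyring, and flushes the initiator interface. A rough sketch of the check-then-kill pattern visible in the xtrace (kill -0 to confirm the pid is alive, ps to read its comm, then kill and wait); the real killprocess helper in autotest_common.sh may handle more cases:

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                   # nothing to do if it already exited
    local name
    name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 / reactor_1 in this run
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                          # reap it so the socket and shm segment are freed
  }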
00:25:27.704 ************************************ 00:25:27.704 START TEST nvmf_identify_kernel_target 00:25:27.704 ************************************ 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:27.704 * Looking for test storage... 00:25:27.704 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:27.704 01:02:14 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.704 01:02:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.976 01:02:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:32.976 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:32.976 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.976 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.977 01:02:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:32.977 Found net devices under 0000:27:00.0: cvl_0_0 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:32.977 Found net devices under 0000:27:00.1: cvl_0_1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.977 01:02:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:25:32.977 00:25:32.977 --- 10.0.0.2 ping statistics --- 00:25:32.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.977 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:25:32.977 00:25:32.977 --- 10.0.0.1 ping statistics --- 00:25:32.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.977 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:32.977 01:02:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:32.977 01:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:32.977 01:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:32.977 01:02:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:25:36.265 Waiting for block devices as requested 00:25:36.265 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:25:36.265 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:36.265 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:36.265 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:36.265 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:25:36.265 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:36.265 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:25:36.265 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:36.265 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:25:36.524 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:36.524 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:25:36.524 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:25:36.524 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:25:36.781 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:36.781 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:25:36.781 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:36.781 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:25:37.090 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:25:37.090 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:37.090 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:37.090 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:37.090 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:25:37.090 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:37.090 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:37.090 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:37.090 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:37.090 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:37.383 No valid GPT data, bailing 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 
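The configfs writes traced just after this point turn one of the scanned namespaces (/dev/nvme1n1) into a kernel NVMe/TCP target listening on 10.0.0.1:4420. The xtrace only shows the values being echoed, not the attribute files they land in; a condensed sketch assuming the standard nvmet configfs attribute names:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"               # model string reported by identify
  echo 1            > "$subsys/attr_allow_any_host"                          # accept any host NQN
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"                     # back the namespace with the local drive
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"                                    # listen address and transport
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                                        # publish the subsystem on the port

The nvme discover output further down (two records, the discovery subsystem and nqn.2016-06.io.spdk:testnqn, both at 10.0.0.1:4420) is the direct result of this setup.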
00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:25:37.383 No valid GPT data, bailing 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:25:37.383 00:25:37.383 Discovery Log Number of Records 2, Generation counter 2 00:25:37.383 =====Discovery Log Entry 0====== 00:25:37.383 trtype: tcp 00:25:37.383 adrfam: ipv4 00:25:37.383 subtype: current discovery subsystem 00:25:37.383 treq: not specified, sq flow control disable supported 00:25:37.383 portid: 1 00:25:37.383 trsvcid: 4420 00:25:37.383 subnqn: nqn.2014-08.org.nvmexpress.discovery 
00:25:37.383 traddr: 10.0.0.1 00:25:37.383 eflags: none 00:25:37.383 sectype: none 00:25:37.383 =====Discovery Log Entry 1====== 00:25:37.383 trtype: tcp 00:25:37.383 adrfam: ipv4 00:25:37.383 subtype: nvme subsystem 00:25:37.383 treq: not specified, sq flow control disable supported 00:25:37.383 portid: 1 00:25:37.383 trsvcid: 4420 00:25:37.383 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:37.383 traddr: 10.0.0.1 00:25:37.383 eflags: none 00:25:37.383 sectype: none 00:25:37.383 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:37.383 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:37.383 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.383 ===================================================== 00:25:37.383 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:37.383 ===================================================== 00:25:37.383 Controller Capabilities/Features 00:25:37.384 ================================ 00:25:37.384 Vendor ID: 0000 00:25:37.384 Subsystem Vendor ID: 0000 00:25:37.384 Serial Number: c7fe811df05c71531de4 00:25:37.384 Model Number: Linux 00:25:37.384 Firmware Version: 6.7.0-68 00:25:37.384 Recommended Arb Burst: 0 00:25:37.384 IEEE OUI Identifier: 00 00 00 00:25:37.384 Multi-path I/O 00:25:37.384 May have multiple subsystem ports: No 00:25:37.384 May have multiple controllers: No 00:25:37.384 Associated with SR-IOV VF: No 00:25:37.384 Max Data Transfer Size: Unlimited 00:25:37.384 Max Number of Namespaces: 0 00:25:37.384 Max Number of I/O Queues: 1024 00:25:37.384 NVMe Specification Version (VS): 1.3 00:25:37.384 NVMe Specification Version (Identify): 1.3 00:25:37.384 Maximum Queue Entries: 1024 00:25:37.384 Contiguous Queues Required: No 00:25:37.384 Arbitration Mechanisms Supported 00:25:37.384 Weighted Round Robin: Not Supported 00:25:37.384 Vendor Specific: Not Supported 00:25:37.384 Reset Timeout: 7500 ms 00:25:37.384 Doorbell Stride: 4 bytes 00:25:37.384 NVM Subsystem Reset: Not Supported 00:25:37.384 Command Sets Supported 00:25:37.384 NVM Command Set: Supported 00:25:37.384 Boot Partition: Not Supported 00:25:37.384 Memory Page Size Minimum: 4096 bytes 00:25:37.384 Memory Page Size Maximum: 4096 bytes 00:25:37.384 Persistent Memory Region: Not Supported 00:25:37.384 Optional Asynchronous Events Supported 00:25:37.384 Namespace Attribute Notices: Not Supported 00:25:37.384 Firmware Activation Notices: Not Supported 00:25:37.384 ANA Change Notices: Not Supported 00:25:37.384 PLE Aggregate Log Change Notices: Not Supported 00:25:37.384 LBA Status Info Alert Notices: Not Supported 00:25:37.384 EGE Aggregate Log Change Notices: Not Supported 00:25:37.384 Normal NVM Subsystem Shutdown event: Not Supported 00:25:37.384 Zone Descriptor Change Notices: Not Supported 00:25:37.384 Discovery Log Change Notices: Supported 00:25:37.384 Controller Attributes 00:25:37.384 128-bit Host Identifier: Not Supported 00:25:37.384 Non-Operational Permissive Mode: Not Supported 00:25:37.384 NVM Sets: Not Supported 00:25:37.384 Read Recovery Levels: Not Supported 00:25:37.384 Endurance Groups: Not Supported 00:25:37.384 Predictable Latency Mode: Not Supported 00:25:37.384 Traffic Based Keep ALive: Not Supported 00:25:37.384 Namespace Granularity: Not Supported 00:25:37.384 SQ Associations: Not Supported 00:25:37.384 UUID List: Not Supported 00:25:37.384 Multi-Domain 
Subsystem: Not Supported 00:25:37.384 Fixed Capacity Management: Not Supported 00:25:37.384 Variable Capacity Management: Not Supported 00:25:37.384 Delete Endurance Group: Not Supported 00:25:37.384 Delete NVM Set: Not Supported 00:25:37.384 Extended LBA Formats Supported: Not Supported 00:25:37.384 Flexible Data Placement Supported: Not Supported 00:25:37.384 00:25:37.384 Controller Memory Buffer Support 00:25:37.384 ================================ 00:25:37.384 Supported: No 00:25:37.384 00:25:37.384 Persistent Memory Region Support 00:25:37.384 ================================ 00:25:37.384 Supported: No 00:25:37.384 00:25:37.384 Admin Command Set Attributes 00:25:37.384 ============================ 00:25:37.384 Security Send/Receive: Not Supported 00:25:37.384 Format NVM: Not Supported 00:25:37.384 Firmware Activate/Download: Not Supported 00:25:37.384 Namespace Management: Not Supported 00:25:37.384 Device Self-Test: Not Supported 00:25:37.384 Directives: Not Supported 00:25:37.384 NVMe-MI: Not Supported 00:25:37.384 Virtualization Management: Not Supported 00:25:37.384 Doorbell Buffer Config: Not Supported 00:25:37.384 Get LBA Status Capability: Not Supported 00:25:37.384 Command & Feature Lockdown Capability: Not Supported 00:25:37.384 Abort Command Limit: 1 00:25:37.384 Async Event Request Limit: 1 00:25:37.384 Number of Firmware Slots: N/A 00:25:37.384 Firmware Slot 1 Read-Only: N/A 00:25:37.384 Firmware Activation Without Reset: N/A 00:25:37.384 Multiple Update Detection Support: N/A 00:25:37.384 Firmware Update Granularity: No Information Provided 00:25:37.384 Per-Namespace SMART Log: No 00:25:37.384 Asymmetric Namespace Access Log Page: Not Supported 00:25:37.384 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:37.384 Command Effects Log Page: Not Supported 00:25:37.384 Get Log Page Extended Data: Supported 00:25:37.384 Telemetry Log Pages: Not Supported 00:25:37.384 Persistent Event Log Pages: Not Supported 00:25:37.384 Supported Log Pages Log Page: May Support 00:25:37.384 Commands Supported & Effects Log Page: Not Supported 00:25:37.384 Feature Identifiers & Effects Log Page:May Support 00:25:37.384 NVMe-MI Commands & Effects Log Page: May Support 00:25:37.384 Data Area 4 for Telemetry Log: Not Supported 00:25:37.384 Error Log Page Entries Supported: 1 00:25:37.384 Keep Alive: Not Supported 00:25:37.384 00:25:37.384 NVM Command Set Attributes 00:25:37.384 ========================== 00:25:37.384 Submission Queue Entry Size 00:25:37.384 Max: 1 00:25:37.384 Min: 1 00:25:37.384 Completion Queue Entry Size 00:25:37.384 Max: 1 00:25:37.384 Min: 1 00:25:37.384 Number of Namespaces: 0 00:25:37.384 Compare Command: Not Supported 00:25:37.384 Write Uncorrectable Command: Not Supported 00:25:37.384 Dataset Management Command: Not Supported 00:25:37.384 Write Zeroes Command: Not Supported 00:25:37.384 Set Features Save Field: Not Supported 00:25:37.384 Reservations: Not Supported 00:25:37.384 Timestamp: Not Supported 00:25:37.384 Copy: Not Supported 00:25:37.384 Volatile Write Cache: Not Present 00:25:37.384 Atomic Write Unit (Normal): 1 00:25:37.384 Atomic Write Unit (PFail): 1 00:25:37.384 Atomic Compare & Write Unit: 1 00:25:37.384 Fused Compare & Write: Not Supported 00:25:37.384 Scatter-Gather List 00:25:37.384 SGL Command Set: Supported 00:25:37.384 SGL Keyed: Not Supported 00:25:37.384 SGL Bit Bucket Descriptor: Not Supported 00:25:37.384 SGL Metadata Pointer: Not Supported 00:25:37.384 Oversized SGL: Not Supported 00:25:37.384 SGL Metadata Address: Not Supported 
00:25:37.384 SGL Offset: Supported 00:25:37.384 Transport SGL Data Block: Not Supported 00:25:37.384 Replay Protected Memory Block: Not Supported 00:25:37.384 00:25:37.384 Firmware Slot Information 00:25:37.384 ========================= 00:25:37.384 Active slot: 0 00:25:37.384 00:25:37.384 00:25:37.384 Error Log 00:25:37.384 ========= 00:25:37.384 00:25:37.384 Active Namespaces 00:25:37.384 ================= 00:25:37.384 Discovery Log Page 00:25:37.384 ================== 00:25:37.384 Generation Counter: 2 00:25:37.384 Number of Records: 2 00:25:37.384 Record Format: 0 00:25:37.384 00:25:37.384 Discovery Log Entry 0 00:25:37.384 ---------------------- 00:25:37.384 Transport Type: 3 (TCP) 00:25:37.384 Address Family: 1 (IPv4) 00:25:37.384 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:37.384 Entry Flags: 00:25:37.384 Duplicate Returned Information: 0 00:25:37.384 Explicit Persistent Connection Support for Discovery: 0 00:25:37.384 Transport Requirements: 00:25:37.384 Secure Channel: Not Specified 00:25:37.384 Port ID: 1 (0x0001) 00:25:37.384 Controller ID: 65535 (0xffff) 00:25:37.384 Admin Max SQ Size: 32 00:25:37.384 Transport Service Identifier: 4420 00:25:37.384 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:37.384 Transport Address: 10.0.0.1 00:25:37.384 Discovery Log Entry 1 00:25:37.384 ---------------------- 00:25:37.384 Transport Type: 3 (TCP) 00:25:37.384 Address Family: 1 (IPv4) 00:25:37.384 Subsystem Type: 2 (NVM Subsystem) 00:25:37.384 Entry Flags: 00:25:37.384 Duplicate Returned Information: 0 00:25:37.384 Explicit Persistent Connection Support for Discovery: 0 00:25:37.384 Transport Requirements: 00:25:37.384 Secure Channel: Not Specified 00:25:37.384 Port ID: 1 (0x0001) 00:25:37.384 Controller ID: 65535 (0xffff) 00:25:37.384 Admin Max SQ Size: 32 00:25:37.384 Transport Service Identifier: 4420 00:25:37.384 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:37.384 Transport Address: 10.0.0.1 00:25:37.384 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:37.384 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.384 get_feature(0x01) failed 00:25:37.384 get_feature(0x02) failed 00:25:37.384 get_feature(0x04) failed 00:25:37.384 ===================================================== 00:25:37.384 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:37.384 ===================================================== 00:25:37.384 Controller Capabilities/Features 00:25:37.384 ================================ 00:25:37.384 Vendor ID: 0000 00:25:37.384 Subsystem Vendor ID: 0000 00:25:37.384 Serial Number: e18e5003448b68fc996c 00:25:37.384 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:37.384 Firmware Version: 6.7.0-68 00:25:37.384 Recommended Arb Burst: 6 00:25:37.384 IEEE OUI Identifier: 00 00 00 00:25:37.384 Multi-path I/O 00:25:37.384 May have multiple subsystem ports: Yes 00:25:37.384 May have multiple controllers: Yes 00:25:37.384 Associated with SR-IOV VF: No 00:25:37.385 Max Data Transfer Size: Unlimited 00:25:37.385 Max Number of Namespaces: 1024 00:25:37.385 Max Number of I/O Queues: 128 00:25:37.385 NVMe Specification Version (VS): 1.3 00:25:37.385 NVMe Specification Version (Identify): 1.3 00:25:37.385 Maximum Queue Entries: 1024 00:25:37.385 Contiguous Queues Required: No 00:25:37.385 
Arbitration Mechanisms Supported 00:25:37.385 Weighted Round Robin: Not Supported 00:25:37.385 Vendor Specific: Not Supported 00:25:37.385 Reset Timeout: 7500 ms 00:25:37.385 Doorbell Stride: 4 bytes 00:25:37.385 NVM Subsystem Reset: Not Supported 00:25:37.385 Command Sets Supported 00:25:37.385 NVM Command Set: Supported 00:25:37.385 Boot Partition: Not Supported 00:25:37.385 Memory Page Size Minimum: 4096 bytes 00:25:37.385 Memory Page Size Maximum: 4096 bytes 00:25:37.385 Persistent Memory Region: Not Supported 00:25:37.385 Optional Asynchronous Events Supported 00:25:37.385 Namespace Attribute Notices: Supported 00:25:37.385 Firmware Activation Notices: Not Supported 00:25:37.385 ANA Change Notices: Supported 00:25:37.385 PLE Aggregate Log Change Notices: Not Supported 00:25:37.385 LBA Status Info Alert Notices: Not Supported 00:25:37.385 EGE Aggregate Log Change Notices: Not Supported 00:25:37.385 Normal NVM Subsystem Shutdown event: Not Supported 00:25:37.385 Zone Descriptor Change Notices: Not Supported 00:25:37.385 Discovery Log Change Notices: Not Supported 00:25:37.385 Controller Attributes 00:25:37.385 128-bit Host Identifier: Supported 00:25:37.385 Non-Operational Permissive Mode: Not Supported 00:25:37.385 NVM Sets: Not Supported 00:25:37.385 Read Recovery Levels: Not Supported 00:25:37.385 Endurance Groups: Not Supported 00:25:37.385 Predictable Latency Mode: Not Supported 00:25:37.385 Traffic Based Keep ALive: Supported 00:25:37.385 Namespace Granularity: Not Supported 00:25:37.385 SQ Associations: Not Supported 00:25:37.385 UUID List: Not Supported 00:25:37.385 Multi-Domain Subsystem: Not Supported 00:25:37.385 Fixed Capacity Management: Not Supported 00:25:37.385 Variable Capacity Management: Not Supported 00:25:37.385 Delete Endurance Group: Not Supported 00:25:37.385 Delete NVM Set: Not Supported 00:25:37.385 Extended LBA Formats Supported: Not Supported 00:25:37.385 Flexible Data Placement Supported: Not Supported 00:25:37.385 00:25:37.385 Controller Memory Buffer Support 00:25:37.385 ================================ 00:25:37.385 Supported: No 00:25:37.385 00:25:37.385 Persistent Memory Region Support 00:25:37.385 ================================ 00:25:37.385 Supported: No 00:25:37.385 00:25:37.385 Admin Command Set Attributes 00:25:37.385 ============================ 00:25:37.385 Security Send/Receive: Not Supported 00:25:37.385 Format NVM: Not Supported 00:25:37.385 Firmware Activate/Download: Not Supported 00:25:37.385 Namespace Management: Not Supported 00:25:37.385 Device Self-Test: Not Supported 00:25:37.385 Directives: Not Supported 00:25:37.385 NVMe-MI: Not Supported 00:25:37.385 Virtualization Management: Not Supported 00:25:37.385 Doorbell Buffer Config: Not Supported 00:25:37.385 Get LBA Status Capability: Not Supported 00:25:37.385 Command & Feature Lockdown Capability: Not Supported 00:25:37.385 Abort Command Limit: 4 00:25:37.385 Async Event Request Limit: 4 00:25:37.385 Number of Firmware Slots: N/A 00:25:37.385 Firmware Slot 1 Read-Only: N/A 00:25:37.385 Firmware Activation Without Reset: N/A 00:25:37.385 Multiple Update Detection Support: N/A 00:25:37.385 Firmware Update Granularity: No Information Provided 00:25:37.385 Per-Namespace SMART Log: Yes 00:25:37.385 Asymmetric Namespace Access Log Page: Supported 00:25:37.385 ANA Transition Time : 10 sec 00:25:37.385 00:25:37.385 Asymmetric Namespace Access Capabilities 00:25:37.385 ANA Optimized State : Supported 00:25:37.385 ANA Non-Optimized State : Supported 00:25:37.385 ANA Inaccessible State : 
Supported 00:25:37.385 ANA Persistent Loss State : Supported 00:25:37.385 ANA Change State : Supported 00:25:37.385 ANAGRPID is not changed : No 00:25:37.385 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:37.385 00:25:37.385 ANA Group Identifier Maximum : 128 00:25:37.385 Number of ANA Group Identifiers : 128 00:25:37.385 Max Number of Allowed Namespaces : 1024 00:25:37.385 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:37.385 Command Effects Log Page: Supported 00:25:37.385 Get Log Page Extended Data: Supported 00:25:37.385 Telemetry Log Pages: Not Supported 00:25:37.385 Persistent Event Log Pages: Not Supported 00:25:37.385 Supported Log Pages Log Page: May Support 00:25:37.385 Commands Supported & Effects Log Page: Not Supported 00:25:37.385 Feature Identifiers & Effects Log Page:May Support 00:25:37.385 NVMe-MI Commands & Effects Log Page: May Support 00:25:37.385 Data Area 4 for Telemetry Log: Not Supported 00:25:37.385 Error Log Page Entries Supported: 128 00:25:37.385 Keep Alive: Supported 00:25:37.385 Keep Alive Granularity: 1000 ms 00:25:37.385 00:25:37.385 NVM Command Set Attributes 00:25:37.385 ========================== 00:25:37.385 Submission Queue Entry Size 00:25:37.385 Max: 64 00:25:37.385 Min: 64 00:25:37.385 Completion Queue Entry Size 00:25:37.385 Max: 16 00:25:37.385 Min: 16 00:25:37.385 Number of Namespaces: 1024 00:25:37.385 Compare Command: Not Supported 00:25:37.385 Write Uncorrectable Command: Not Supported 00:25:37.385 Dataset Management Command: Supported 00:25:37.385 Write Zeroes Command: Supported 00:25:37.385 Set Features Save Field: Not Supported 00:25:37.385 Reservations: Not Supported 00:25:37.385 Timestamp: Not Supported 00:25:37.385 Copy: Not Supported 00:25:37.385 Volatile Write Cache: Present 00:25:37.385 Atomic Write Unit (Normal): 1 00:25:37.385 Atomic Write Unit (PFail): 1 00:25:37.385 Atomic Compare & Write Unit: 1 00:25:37.385 Fused Compare & Write: Not Supported 00:25:37.385 Scatter-Gather List 00:25:37.385 SGL Command Set: Supported 00:25:37.385 SGL Keyed: Not Supported 00:25:37.385 SGL Bit Bucket Descriptor: Not Supported 00:25:37.385 SGL Metadata Pointer: Not Supported 00:25:37.385 Oversized SGL: Not Supported 00:25:37.385 SGL Metadata Address: Not Supported 00:25:37.385 SGL Offset: Supported 00:25:37.385 Transport SGL Data Block: Not Supported 00:25:37.385 Replay Protected Memory Block: Not Supported 00:25:37.385 00:25:37.385 Firmware Slot Information 00:25:37.385 ========================= 00:25:37.385 Active slot: 0 00:25:37.385 00:25:37.385 Asymmetric Namespace Access 00:25:37.385 =========================== 00:25:37.385 Change Count : 0 00:25:37.385 Number of ANA Group Descriptors : 1 00:25:37.385 ANA Group Descriptor : 0 00:25:37.385 ANA Group ID : 1 00:25:37.385 Number of NSID Values : 1 00:25:37.385 Change Count : 0 00:25:37.385 ANA State : 1 00:25:37.385 Namespace Identifier : 1 00:25:37.385 00:25:37.385 Commands Supported and Effects 00:25:37.385 ============================== 00:25:37.385 Admin Commands 00:25:37.385 -------------- 00:25:37.385 Get Log Page (02h): Supported 00:25:37.385 Identify (06h): Supported 00:25:37.385 Abort (08h): Supported 00:25:37.385 Set Features (09h): Supported 00:25:37.385 Get Features (0Ah): Supported 00:25:37.385 Asynchronous Event Request (0Ch): Supported 00:25:37.385 Keep Alive (18h): Supported 00:25:37.385 I/O Commands 00:25:37.385 ------------ 00:25:37.385 Flush (00h): Supported 00:25:37.385 Write (01h): Supported LBA-Change 00:25:37.385 Read (02h): Supported 00:25:37.385 Write Zeroes 
(08h): Supported LBA-Change 00:25:37.385 Dataset Management (09h): Supported 00:25:37.385 00:25:37.385 Error Log 00:25:37.385 ========= 00:25:37.385 Entry: 0 00:25:37.385 Error Count: 0x3 00:25:37.385 Submission Queue Id: 0x0 00:25:37.385 Command Id: 0x5 00:25:37.385 Phase Bit: 0 00:25:37.385 Status Code: 0x2 00:25:37.385 Status Code Type: 0x0 00:25:37.385 Do Not Retry: 1 00:25:37.385 Error Location: 0x28 00:25:37.385 LBA: 0x0 00:25:37.385 Namespace: 0x0 00:25:37.385 Vendor Log Page: 0x0 00:25:37.385 ----------- 00:25:37.385 Entry: 1 00:25:37.385 Error Count: 0x2 00:25:37.385 Submission Queue Id: 0x0 00:25:37.385 Command Id: 0x5 00:25:37.385 Phase Bit: 0 00:25:37.385 Status Code: 0x2 00:25:37.385 Status Code Type: 0x0 00:25:37.385 Do Not Retry: 1 00:25:37.385 Error Location: 0x28 00:25:37.385 LBA: 0x0 00:25:37.385 Namespace: 0x0 00:25:37.385 Vendor Log Page: 0x0 00:25:37.385 ----------- 00:25:37.385 Entry: 2 00:25:37.385 Error Count: 0x1 00:25:37.385 Submission Queue Id: 0x0 00:25:37.385 Command Id: 0x4 00:25:37.385 Phase Bit: 0 00:25:37.385 Status Code: 0x2 00:25:37.385 Status Code Type: 0x0 00:25:37.385 Do Not Retry: 1 00:25:37.385 Error Location: 0x28 00:25:37.385 LBA: 0x0 00:25:37.385 Namespace: 0x0 00:25:37.385 Vendor Log Page: 0x0 00:25:37.385 00:25:37.385 Number of Queues 00:25:37.385 ================ 00:25:37.385 Number of I/O Submission Queues: 128 00:25:37.385 Number of I/O Completion Queues: 128 00:25:37.385 00:25:37.385 ZNS Specific Controller Data 00:25:37.385 ============================ 00:25:37.386 Zone Append Size Limit: 0 00:25:37.386 00:25:37.386 00:25:37.386 Active Namespaces 00:25:37.386 ================= 00:25:37.386 get_feature(0x05) failed 00:25:37.386 Namespace ID:1 00:25:37.386 Command Set Identifier: NVM (00h) 00:25:37.386 Deallocate: Supported 00:25:37.386 Deallocated/Unwritten Error: Not Supported 00:25:37.386 Deallocated Read Value: Unknown 00:25:37.386 Deallocate in Write Zeroes: Not Supported 00:25:37.386 Deallocated Guard Field: 0xFFFF 00:25:37.386 Flush: Supported 00:25:37.386 Reservation: Not Supported 00:25:37.386 Namespace Sharing Capabilities: Multiple Controllers 00:25:37.386 Size (in LBAs): 1875385008 (894GiB) 00:25:37.386 Capacity (in LBAs): 1875385008 (894GiB) 00:25:37.386 Utilization (in LBAs): 1875385008 (894GiB) 00:25:37.386 UUID: 32323cca-286e-497d-805e-26eb6ec1c7df 00:25:37.386 Thin Provisioning: Not Supported 00:25:37.386 Per-NS Atomic Units: Yes 00:25:37.386 Atomic Write Unit (Normal): 8 00:25:37.386 Atomic Write Unit (PFail): 8 00:25:37.386 Preferred Write Granularity: 8 00:25:37.386 Atomic Compare & Write Unit: 8 00:25:37.386 Atomic Boundary Size (Normal): 0 00:25:37.386 Atomic Boundary Size (PFail): 0 00:25:37.386 Atomic Boundary Offset: 0 00:25:37.386 NGUID/EUI64 Never Reused: No 00:25:37.386 ANA group ID: 1 00:25:37.386 Namespace Write Protected: No 00:25:37.386 Number of LBA Formats: 1 00:25:37.386 Current LBA Format: LBA Format #00 00:25:37.386 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:37.386 00:25:37.386 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:37.386 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:37.386 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:37.386 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:37.386 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:37.386 01:02:24 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.386 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:37.386 rmmod nvme_tcp 00:25:37.386 rmmod nvme_fabrics 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.647 01:02:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:39.549 01:02:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:25:42.835 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:42.835 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:42.835 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:42.835 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:25:42.835 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:42.835 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:25:42.835 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:42.835 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:25:42.835 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 
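Teardown at the end of the test is the mirror image: nvmftestfini unloads nvme-tcp and nvme-fabrics on the host side, then clean_kernel_target dismantles the configfs tree before nvmet itself is removed. Only the destination of the `echo 0` is hidden by xtrace; presumably it clears the namespace's enable flag before the directories go away. In the order the trace runs it, as a sketch:

    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # assumed target of the 'echo 0'
    rm -f    /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir    /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir    /sys/kernel/config/nvmet/ports/1
    rmdir    /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet

The `idxd -> vfio-pci` and `nvme -> vfio-pci` lines around this point come from setup.sh re-running afterwards to hand the DSA and NVMe devices back to vfio-pci for the next test.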
00:25:42.835 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:25:42.835 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:25:42.835 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:25:42.835 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:42.835 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:25:42.835 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:42.835 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:25:43.093 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:25:43.353 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:25:43.613 00:25:43.613 real 0m15.998s 00:25:43.613 user 0m3.634s 00:25:43.613 sys 0m7.883s 00:25:43.613 01:02:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:43.613 01:02:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.613 ************************************ 00:25:43.613 END TEST nvmf_identify_kernel_target 00:25:43.613 ************************************ 00:25:43.613 01:02:30 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:43.613 01:02:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:43.613 01:02:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:43.613 01:02:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:43.613 ************************************ 00:25:43.613 START TEST nvmf_auth 00:25:43.613 ************************************ 00:25:43.613 01:02:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:43.613 * Looking for test storage... 00:25:43.613 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:25:43.613 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.613 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.874 01:02:30 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:25:43.874 01:02:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 
00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:50.446 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:50.446 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
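The whitelist match above identified the two E810 ports (8086:159b) at 0000:27:00.0 and 0000:27:00.1; the sysfs glob on the preceding line, together with the up-state check and the "Found net devices under ..." messages that follow, resolves each PCI address to the kernel net device bound to it (cvl_0_0 and cvl_0_1 here). A stand-alone sketch of the same resolution, with the device ID taken from the trace and everything else generic:

    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do   # the E810 ports found above
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdir ]] || continue                          # port bound to a non-networking driver
            dev=${netdir##*/}
            [[ $(cat "$netdir/operstate") == up ]] &&
                echo "Found net devices under $pci: $dev"
        done
    done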
00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:50.446 Found net devices under 0000:27:00.0: cvl_0_0 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:50.446 Found net devices under 0000:27:00.1: cvl_0_1 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.446 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:50.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:25:50.446 00:25:50.446 --- 10.0.0.2 ping statistics --- 00:25:50.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.447 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:25:50.447 00:25:50.447 --- 10.0.0.1 ping statistics --- 00:25:50.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.447 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=3605738 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 3605738 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 3605738 ']' 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
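nvmf_tcp_init then builds the loopback-free rig visible above: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule admits TCP port 4420 and the two pings confirm reachability in both directions. Condensed from the trace (same device names and addresses; the earlier `ip -4 addr flush` calls and error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The root namespace also gets the nvme-tcp host modules loaded, and every later target-side command, including the nvmf_tgt started just after this point, runs under `ip netns exec cvl_0_0_ns_spdk`.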
00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:50.447 01:02:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=7a8f105700f9d6af8c6e15f98b539743 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.H3u 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 7a8f105700f9d6af8c6e15f98b539743 0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 7a8f105700f9d6af8c6e15f98b539743 0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=7a8f105700f9d6af8c6e15f98b539743 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.H3u 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.H3u 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.H3u 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- 
# len=64 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=a61f3ced8c8c99fe21900eb2e8a4f205ece9033149cd7ecb1341a9521624260f 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.bNJ 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key a61f3ced8c8c99fe21900eb2e8a4f205ece9033149cd7ecb1341a9521624260f 3 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 a61f3ced8c8c99fe21900eb2e8a4f205ece9033149cd7ecb1341a9521624260f 3 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=a61f3ced8c8c99fe21900eb2e8a4f205ece9033149cd7ecb1341a9521624260f 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.bNJ 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.bNJ 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.bNJ 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=5541e4b71616a60738d825dd216dbb2d38d120359477aec0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.v12 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 5541e4b71616a60738d825dd216dbb2d38d120359477aec0 0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 5541e4b71616a60738d825dd216dbb2d38d120359477aec0 0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=5541e4b71616a60738d825dd216dbb2d38d120359477aec0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.v12 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.v12 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.v12 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:25:50.708 01:02:37 
nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=a5f29fe4d13ec90d6a125b67cdc77de146ba3845600695a0 00:25:50.708 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.IsD 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key a5f29fe4d13ec90d6a125b67cdc77de146ba3845600695a0 2 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 a5f29fe4d13ec90d6a125b67cdc77de146ba3845600695a0 2 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=a5f29fe4d13ec90d6a125b67cdc77de146ba3845600695a0 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.IsD 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.IsD 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.IsD 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=3cb4947571be4ce7ec8b8d5f15a26e85 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.Zkx 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 3cb4947571be4ce7ec8b8d5f15a26e85 1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 3cb4947571be4ce7ec8b8d5f15a26e85 1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=3cb4947571be4ce7ec8b8d5f15a26e85 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.Zkx 
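Each gen_key call above draws len/2 random bytes with `xxd -p -c0 -l <n> /dev/urandom` and hands the resulting hex string to format_dhchap_key, whose Python body xtrace does not show, before the secret is written to a mktemp file and chmod'ed to 0600. The sketch below reproduces the DH-HMAC-CHAP secret representation that the DHHC-1 prefix points to (prefix, two-digit hash identifier, base64 of the secret followed by its little-endian CRC-32, trailing colon) and, following the `format_key DHHC-1 <hex> <digest>` arguments, treats the ASCII hex string itself as the secret; take it as an approximation of the helper, not a verbatim copy:

    # 16 random bytes -> 32 hex chars, as in 'gen_key null 32'; "00" marks a secret used untransformed
    key_hex=$(xxd -p -c0 -l 16 /dev/urandom)
    python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:00:%s:" % base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode())' "$key_hex"

The 0 to 3 passed as the last gen_key argument selects that hash-identifier field (none, SHA-256, SHA-384, SHA-512), which is why the sha512 and sha384 secrets generated above land in spdk.key-sha512.* and spdk.key-sha384.* files while carrying the same DHHC-1 framing.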
00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.Zkx 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # keys[2]=/tmp/spdk.key-sha256.Zkx 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=5bd7071eaf74dfeebc59b6eec6aa157f 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.xdR 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 5bd7071eaf74dfeebc59b6eec6aa157f 1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 5bd7071eaf74dfeebc59b6eec6aa157f 1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=5bd7071eaf74dfeebc59b6eec6aa157f 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.xdR 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.xdR 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.xdR 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=6d6031678b00c0a21075c9bccf6df0323bb7eb12ccd2afa0 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.Idh 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 6d6031678b00c0a21075c9bccf6df0323bb7eb12ccd2afa0 2 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 6d6031678b00c0a21075c9bccf6df0323bb7eb12ccd2afa0 2 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # 
key=6d6031678b00c0a21075c9bccf6df0323bb7eb12ccd2afa0 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.Idh 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.Idh 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.Idh 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=97be80b7c23844263f250165065a61f3 00:25:50.970 01:02:37 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:25:50.970 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.SaI 00:25:50.970 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 97be80b7c23844263f250165065a61f3 0 00:25:50.970 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 97be80b7c23844263f250165065a61f3 0 00:25:50.970 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.970 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.971 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=97be80b7c23844263f250165065a61f3 00:25:50.971 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:25:50.971 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.SaI 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.SaI 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.SaI 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=52e6d9965d4768207d54d5ee5758b3dce98b5464d22b4f49e2c0a3a8ca8c3a8f 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.a49 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 52e6d9965d4768207d54d5ee5758b3dce98b5464d22b4f49e2c0a3a8ca8c3a8f 3 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 
-- # format_key DHHC-1 52e6d9965d4768207d54d5ee5758b3dce98b5464d22b4f49e2c0a3a8ca8c3a8f 3 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:25:51.230 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=52e6d9965d4768207d54d5ee5758b3dce98b5464d22b4f49e2c0a3a8ca8c3a8f 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.a49 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.a49 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.a49 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 3605738 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 3605738 ']' 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.H3u 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.bNJ ]] 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bNJ 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.v12 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.IsD ]] 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- 
host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IsD 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.231 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Zkx 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.xdR ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xdR 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Idh 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.SaI ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.SaI 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.a49 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:51.490 01:02:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:25:54.031 Waiting for block devices as requested 00:25:54.031 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:25:54.031 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:54.288 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:54.288 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:54.288 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:25:54.288 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:54.288 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:25:54.548 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:54.548 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:25:54.548 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:54.548 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:25:54.809 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:25:54.809 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:25:54.809 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:54.809 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:25:55.069 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:25:55.069 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:25:55.069 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:56.007 01:02:42 
nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:56.007 No valid GPT data, bailing 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:25:56.007 No valid GPT data, bailing 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:25:56.007 00:25:56.007 Discovery Log Number of Records 2, Generation counter 2 00:25:56.007 =====Discovery Log Entry 0====== 00:25:56.007 trtype: tcp 
00:25:56.007 adrfam: ipv4 00:25:56.007 subtype: current discovery subsystem 00:25:56.007 treq: not specified, sq flow control disable supported 00:25:56.007 portid: 1 00:25:56.007 trsvcid: 4420 00:25:56.007 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:56.007 traddr: 10.0.0.1 00:25:56.007 eflags: none 00:25:56.007 sectype: none 00:25:56.007 =====Discovery Log Entry 1====== 00:25:56.007 trtype: tcp 00:25:56.007 adrfam: ipv4 00:25:56.007 subtype: nvme subsystem 00:25:56.007 treq: not specified, sq flow control disable supported 00:25:56.007 portid: 1 00:25:56.007 trsvcid: 4420 00:25:56.007 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:56.007 traddr: 10.0.0.1 00:25:56.007 eflags: none 00:25:56.007 sectype: none 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:25:56.007 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.008 01:02:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.008 nvme0n1 00:25:56.008 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.008 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.008 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:56.008 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.008 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.008 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.266 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.267 nvme0n1 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:56.267 
01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.267 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.527 nvme0n1 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 
-- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.527 nvme0n1 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.527 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 
-- # for keyid in "${!keys[@]}" 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.788 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 
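The rpc_cmd calls traced above are the initiator-side half of this test: rpc_cmd in the autotest harness forwards its arguments to SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket that waitforlisten waited on. The same sequence can be replayed by hand roughly as sketched below. This is a minimal sketch, not the test script itself: the rpc.py path is inferred from the workspace paths in the log, and the DHHC-1 strings are placeholders standing in for the key/ckey values printed earlier in the trace.

  # Replay of the traced initiator-side RPCs (sketch; adjust RPC to your SPDK checkout).
  RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

  # Register a host key and a controller (bidirectional) key with the keyring,
  # as host/auth.sh@94/@95 do for key1/ckey1.
  echo 'DHHC-1:00:<base64 host secret>:'  > /tmp/spdk.key-null.v12   && chmod 0600 /tmp/spdk.key-null.v12
  echo 'DHHC-1:02:<base64 ctrlr secret>:' > /tmp/spdk.key-sha384.IsD && chmod 0600 /tmp/spdk.key-sha384.IsD
  $RPC keyring_file_add_key key1  /tmp/spdk.key-null.v12
  $RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IsD

  # Enable one digest/dhgroup combination under test (host/auth.sh@73).
  $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Attach the kernel target with DH-HMAC-CHAP, as host/auth.sh@74 does.
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify the controller came up, then detach before the next digest/dhgroup/keyid
  # round (host/auth.sh@77/@78).
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $RPC bdev_nvme_detach_controller nvme0
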
00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.789 nvme0n1 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:56.789 
01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.789 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.050 nvme0n1 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.050 01:02:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.050 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.309 nvme0n1 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.309 01:02:44 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.309 nvme0n1 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.309 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe3072 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.568 nvme0n1 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:25:57.568 
01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.568 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.827 nvme0n1 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.827 01:02:44 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:57.827 01:02:44 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.827 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.085 nvme0n1 00:25:58.085 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.085 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.085 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:58.085 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.085 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.085 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth 
-- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.086 01:02:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.086 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.346 nvme0n1 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.346 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.607 nvme0n1 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.607 01:02:45 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.607 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.869 nvme0n1 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local 
digest dhgroup keyid ckey 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.869 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.129 nvme0n1 00:25:59.129 01:02:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
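The repeated host/auth.sh@42-@51 entries in this part of the trace come from the test's nvmet_auth_set_key helper, which provisions one DH-HMAC-CHAP secret on the kernel nvmet target (hash, DH group, key, and optional controller key) before the host tries to connect with that same key. A minimal sketch of what such a helper does is shown below; it is not the literal test code, the configfs layout and the keys[]/ckeys[] arrays are assumptions, and the only values taken as given are the NQN, digest, DH group, and DHHC-1 secrets echoed in the trace above.

    # Sketch only: assumed kernel nvmet configfs layout; keys[]/ckeys[] are
    # assumed to hold the DHHC-1:... secrets echoed by auth.sh@45/@46 above.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe4096
        echo "$key"          > "$host/dhchap_key"      # host secret (DHHC-1:..)
        # A controller (bidirectional) key is optional; keyid=4 above has none.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }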
00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.129 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.388 nvme0n1 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 
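On the host side, every connect_authenticate pass in this trace follows the same sequence: restrict the initiator to a single digest/DH group, attach a controller with the matching DHCHAP key(s), verify that the controller (and hence its namespace, the nvme0n1 lines) shows up, then detach before the next combination. The condensed sketch below uses only the RPCs visible in the trace; rpc_cmd is the test harness wrapper around SPDK's scripts/rpc.py, and key names such as key1/ckey1 are assumed to have been loaded into the SPDK keyring earlier in the test (not shown in this excerpt).

    # One authentication round, mirroring the RPCs traced above (sketch only).
    digest=sha256 dhgroup=ffdhe4096 keyid=1

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # The controller only appears if DH-HMAC-CHAP negotiation succeeded.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    rpc_cmd bdev_nvme_detach_controller nvme0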
00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:59.388 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:59.389 
01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.389 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.679 nvme0n1 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- 
host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.679 01:02:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:00.245 nvme0n1 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.245 01:02:47 
nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.245 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.246 
01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.246 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:00.504 nvme0n1 00:26:00.504 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.504 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.504 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.504 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:00.504 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:00.504 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.504 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.504 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.505 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.078 nvme0n1 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.078 01:02:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 nvme0n1 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:01.337 01:02:48 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:01.338 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.338 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.338 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.903 nvme0n1 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.903 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.161 01:02:48 
nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.161 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.162 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:02.162 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.162 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:02.162 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:02.162 01:02:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:02.162 01:02:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.162 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.162 01:02:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:02.731 nvme0n1 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:02.731 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- 
host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.732 01:02:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:03.300 nvme0n1 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:03.300 01:02:50 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.300 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:03.865 nvme0n1 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
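Every key index in the surrounding trace runs the same round trip: nvmet_auth_set_key provisions the target with the digest, FFDHE group and DHHC-1 secrets, and connect_authenticate then pins the host to that digest/group, attaches with the matching key names, checks the controller and detaches it. Below is a minimal sketch of one round trip, reconstructed only from commands visible in the trace; treating rpc_cmd as a thin wrapper over SPDK's scripts/rpc.py is an assumption, the key names key0/ckey0 are assumed to have been registered earlier in host/auth.sh, and the configfs writes hidden behind nvmet_auth_set_key's redirections are summarised in comments rather than spelled out.

```bash
#!/usr/bin/env bash
# Sketch of one (digest, dhgroup, keyid) round trip from the trace above.
# Assumption: rpc_cmd forwards to SPDK's scripts/rpc.py against the running target.
rpc_cmd() { ./scripts/rpc.py "$@"; }

digest=sha256 dhgroup=ffdhe8192 keyid=0

# Target side (nvmet_auth_set_key): 'hmac(sha256)', the dhgroup and the DHHC-1
# secrets are echoed into the nvmet host configuration; the redirect targets are
# not visible in the trace, so the exact configfs paths are not reproduced here.

# Host side (connect_authenticate): restrict the negotiable digest/dhgroup, then
# attach with the pre-registered DH-HMAC-CHAP key names. For keyid 4 the trace
# drops --dhchap-ctrlr-key entirely, since no controller key exists for it.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the authenticated controller came up, then detach before the next key.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```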
00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.865 01:02:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.432 nvme0n1 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.432 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- 
# keyid=0 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.693 nvme0n1 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:04.693 
01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.693 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.953 nvme0n1 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:04.953 
01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.953 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.954 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.954 nvme0n1 00:26:04.954 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.954 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.954 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.954 01:02:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:04.954 01:02:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:04.954 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 3 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.213 nvme0n1 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
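The blocks before and after this point are that same round trip swept across every digest, DH group and key index; only the 'hmac(...)' string, the ffdheXXXX group and the keyN/ckeyN names change (the two-digit field after "DHHC-1:" encodes the secret's hash transform: 00 for an unhashed secret, 01/02/03 for SHA-256/384/512). A structural sketch of the sweep follows; the array contents are an inference from the digests and groups seen in this part of the log, and the two helpers are stubs standing in for the bodies the trace expands inline.

```bash
#!/usr/bin/env bash
# Structural sketch of the sweep driving the repetition in this trace.
# Inferred, not copied: sha256 finishes ffdhe8192 above, then sha384 restarts
# at ffdhe2048, which suggests the arrays below; intermediate groups and the
# sha512 pass are assumptions about the parts of the run outside this excerpt.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)   # stand-ins for the DHHC-1 secrets loaded earlier

# Stubs for the helpers whose expanded bodies fill the surrounding log.
nvmet_auth_set_key()   { echo "target: hmac($1), $2, key index $3"; }
connect_authenticate() { echo "host:   attach/verify/detach with $1 $2 key$3"; }

for digest in "${digests[@]}"; do                 # host/auth.sh@113
    for dhgroup in "${dhgroups[@]}"; do           # host/auth.sh@114
        for keyid in "${!keys[@]}"; do            # host/auth.sh@115
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```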
00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.213 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.472 nvme0n1 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.472 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.730 nvme0n1 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.730 01:02:52 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:05.730 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.731 nvme0n1 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.731 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe3072 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:05.989 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.990 nvme0n1 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.990 01:02:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:05.990 
01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.990 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.249 nvme0n1 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.249 01:02:53 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.249 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:06.250 01:02:53 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.250 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.510 nvme0n1 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth 
-- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.510 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.771 nvme0n1 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.771 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.030 nvme0n1 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r 
'.[].name' 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:07.030 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.031 01:02:53 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.031 01:02:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.290 nvme0n1 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local 
digest dhgroup keyid ckey 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.290 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.549 nvme0n1 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.549 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.807 nvme0n1 00:26:07.807 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.807 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.807 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 
00:26:07.807 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.807 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:07.808 
01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.808 01:02:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:08.067 nvme0n1 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.067 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- 
host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.326 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 nvme0n1 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.586 01:02:55 
nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.586 
01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.586 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:08.844 nvme0n1 00:26:08.844 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.844 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.844 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:08.845 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.845 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.104 01:02:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.362 nvme0n1 00:26:09.362 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.362 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.362 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.362 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.362 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:09.362 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.362 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.362 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.362 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.363 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.929 nvme0n1 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.929 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.930 01:02:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:10.499 nvme0n1 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.499 01:02:57 
nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.499 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.500 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:10.500 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.500 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:10.500 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:10.500 01:02:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:10.500 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.500 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.500 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:11.068 nvme0n1 00:26:11.068 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.068 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.068 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.068 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:11.068 01:02:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:11.068 01:02:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- 
host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:11.068 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.069 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:11.635 nvme0n1 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:11.635 01:02:58 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.635 01:02:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:12.203 nvme0n1 00:26:12.203 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
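The records up to this point are one pass of the test's per-key loop: nvmet_auth_set_key provisions the digest, dhgroup and DHHC-1 secrets on the target side, and connect_authenticate then restricts the SPDK initiator to that same sha384/ffdhe8192 combination and attaches with the key under test. Reduced to standalone commands, the host side of one pass looks roughly like the sketch below. This is a minimal sketch, assuming the rpc_cmd seen in the trace wraps SPDK's scripts/rpc.py against the default RPC socket, and that key0/ckey0 are key names registered earlier in the run (not shown in this excerpt); the address, NQNs and --dhchap-* flags are copied verbatim from the trace.

  # limit the initiator to a single DH-HMAC-CHAP digest/dhgroup pair for this pass
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  # attach to the target provisioned by nvmet_auth_set_key, authenticating with key0
  # (ckey0 is the controller key, i.e. bidirectional authentication)
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

A successful handshake is what makes the nvme0n1 namespace lines appear in the log shortly after each attach.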
00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.464 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.034 nvme0n1 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- 
# keyid=0 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.034 01:02:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.034 nvme0n1 00:26:13.034 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.034 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:13.034 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.034 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.034 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:13.034 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:13.293 
01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.293 nvme0n1 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:13.293 
01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.293 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.294 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:13.294 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.294 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:13.294 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:13.294 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:13.294 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.294 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.294 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.552 nvme0n1 00:26:13.552 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.552 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.552 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:13.552 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.552 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.552 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 3 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:13.553 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.812 nvme0n1 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
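Every attach in this trace is followed by the same verification and teardown before the loop moves on to the next digest/dhgroup/keyid combination: bdev_nvme_get_controllers is piped through jq to pull the controller name, the name is compared against nvme0, and the controller is detached again. As a standalone snippet, that step is roughly the following (same scripts/rpc.py assumption as the earlier sketch; the expected name nvme0 and the jq filter come straight from the trace).

  # list the attached NVMe-oF controllers and extract their names
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  # the handshake for this key counts as good only if the controller came up as nvme0
  [[ "$name" == "nvme0" ]]
  # detach so the next digest/dhgroup/keyid pass starts from a clean state
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The repeated [[ nvme0 == \n\v\m\e\0 ]] records in the log are bash xtrace's rendering of exactly this quoted comparison.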
00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.812 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.813 nvme0n1 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.813 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.072 nvme0n1 00:26:14.073 01:03:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.073 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.073 01:03:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.073 01:03:01 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.073 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.334 nvme0n1 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe3072 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.334 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.595 nvme0n1 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:14.595 
01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.595 nvme0n1 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.595 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.596 01:03:01 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.596 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:14.856 01:03:01 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.856 nvme0n1 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.856 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth 
-- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.857 01:03:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.116 nvme0n1 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.116 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.375 nvme0n1 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r 
'.[].name' 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.375 01:03:02 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.375 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.667 nvme0n1 00:26:15.667 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.667 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.667 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:15.667 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.667 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local 
digest dhgroup keyid ckey 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.668 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.927 nvme0n1 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
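
The pattern repeating in the trace above is host/auth.sh walking every DH group and every key index for the sha512 digest: for each combination it first programs the target-side DHHC-1 secret (nvmet_auth_set_key), then re-runs the host-side handshake (connect_authenticate). A condensed sketch of that outer loop, using only values visible in this portion of the log; nvmet_auth_set_key and connect_authenticate are the test script's own helpers and the keys/ckeys arrays they rely on are assumed, not reproduced here:

    # Condensed sketch of the loop driving the output above (host/auth.sh@114-117).
    # The dhgroup list only covers the groups exercised in this part of the log.
    digest=sha512
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in 0 1 2 3 4; do                             # the script iterates "${!keys[@]}"
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the target's DHHC-1 secret
            connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify, detach on the host
        done
    done
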
00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.927 01:03:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:16.188 nvme0n1 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 
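
Each connect_authenticate step then reduces to the same four RPCs that appear in the trace. A minimal host-side sketch of one iteration, assuming rpc_cmd wraps SPDK's scripts/rpc.py against a running target and that the named keys (here key3/ckey3) were registered earlier in the test:

    # One connect_authenticate iteration, reduced to the RPCs visible in the log.
    digest=sha512
    dhgroup=ffdhe4096

    # Restrict the initiator to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP with in-band DH-HMAC-CHAP. The controller key is only passed when a
    # ckey exists for this keyid (keyid 4 in the log has none, so that flag is dropped there).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # The attach only succeeds if authentication passed; confirm, then detach for the next keyid.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
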
00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:16.188 
01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.188 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:16.760 nvme0n1 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- 
host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.760 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.019 nvme0n1 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.019 01:03:03 
nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.019 
01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.019 01:03:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.277 nvme0n1 00:26:17.278 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.278 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.536 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.794 nvme0n1 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.794 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.795 01:03:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:18.363 nvme0n1 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E4ZjEwNTcwMGY5ZDZhZjhjNmUxNWY5OGI1Mzk3NDMvqf+Q: 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: ]] 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YTYxZjNjZWQ4YzhjOTlmZTIxOTAwZWIyZThhNGYyMDVlY2U5MDMzMTQ5Y2Q3ZWNiMTM0MWE5NTIxNjI0MjYwZm4z0g0=: 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.363 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:18.932 nvme0n1 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:18.932 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.933 01:03:05 
nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.933 01:03:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:19.499 nvme0n1 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- 
host/auth.sh@46 -- # ckey=DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiNDk0NzU3MWJlNGNlN2VjOGI4ZDVmMTVhMjZlODViz9KL: 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: ]] 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NWJkNzA3MWVhZjc0ZGZlZWJjNTliNmVlYzZhYTE1N2aEDS9q: 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.499 01:03:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:20.066 nvme0n1 00:26:20.066 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.066 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.067 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:20.067 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.067 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:20.067 01:03:07 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.067 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.067 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.067 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.067 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2MDMxNjc4YjAwYzBhMjEwNzVjOWJjY2Y2ZGYwMzIzYmI3ZWIxMmNjZDJhZmEwU7XdWA==: 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: ]] 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:OTdiZTgwYjdjMjM4NDQyNjNmMjUwMTY1MDY1YTYxZjNr+lib: 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:20.327 01:03:07 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.327 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:20.895 nvme0n1 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:20.895 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTJlNmQ5OTY1ZDQ3NjgyMDdkNTRkNWVlNTc1OGIzZGNlOThiNTQ2NGQyMmI0ZjQ5ZTJjMGEzYThjYThjM2E4ZjCafLk=: 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
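[editorial note] For readers following the xtrace, each pass of the keyid loop above reduces to the same short sequence: nvmet_auth_set_key echoes the digest, DH group and DHHC-1 secrets for the kernel target side (presumably into its entry for nqn.2024-02.io.spdk:host0; the exact destination is not shown in this trace), and connect_authenticate then drives the SPDK initiator. A condensed sketch of one iteration, using only commands and arguments already visible above (rpc_cmd is the suite's wrapper around scripts/rpc.py; key3/ckey3 are the key names exactly as they appear in the RPC):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key3 --dhchap-ctrlr-key ckey3
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The keyid=4 iteration that begins above is the one case with no controller key (ckey is empty), so its attach is issued with --dhchap-key key4 only.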
00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.896 01:03:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:21.462 nvme0n1 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:26:21.462 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0MWU0YjcxNjE2YTYwNzM4ZDgyNWRkMjE2ZGJiMmQzOGQxMjAzNTk0NzdhZWMwYb0ATw==: 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YTVmMjlmZTRkMTNlYzkwZDZhMTI1YjY3Y2RjNzdkZTE0NmJhMzg0NTYwMDY5NWEwrVVERw==: 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:21.463 request: 00:26:21.463 { 00:26:21.463 "name": "nvme0", 00:26:21.463 "trtype": 
"tcp", 00:26:21.463 "traddr": "10.0.0.1", 00:26:21.463 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:21.463 "adrfam": "ipv4", 00:26:21.463 "trsvcid": "4420", 00:26:21.463 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:21.463 "method": "bdev_nvme_attach_controller", 00:26:21.463 "req_id": 1 00:26:21.463 } 00:26:21.463 Got JSON-RPC error response 00:26:21.463 response: 00:26:21.463 { 00:26:21.463 "code": -32602, 00:26:21.463 "message": "Invalid parameters" 00:26:21.463 } 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # get_main_ns_ip 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.463 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:21.723 request: 00:26:21.723 { 00:26:21.723 "name": "nvme0", 00:26:21.723 "trtype": "tcp", 00:26:21.723 "traddr": "10.0.0.1", 00:26:21.723 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:21.723 "adrfam": "ipv4", 00:26:21.723 "trsvcid": "4420", 00:26:21.723 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:21.723 "dhchap_key": "key2", 00:26:21.723 "method": "bdev_nvme_attach_controller", 00:26:21.723 "req_id": 1 00:26:21.723 } 00:26:21.723 Got JSON-RPC error response 00:26:21.723 response: 00:26:21.723 { 00:26:21.723 "code": -32602, 00:26:21.723 "message": "Invalid parameters" 00:26:21.723 } 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 
00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:21.723 request: 00:26:21.723 { 00:26:21.723 "name": "nvme0", 00:26:21.723 "trtype": "tcp", 00:26:21.723 "traddr": "10.0.0.1", 00:26:21.723 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:21.723 "adrfam": "ipv4", 00:26:21.723 "trsvcid": "4420", 00:26:21.723 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:21.723 "dhchap_key": "key1", 00:26:21.723 "dhchap_ctrlr_key": "ckey2", 00:26:21.723 "method": "bdev_nvme_attach_controller", 00:26:21.723 "req_id": 1 00:26:21.723 } 00:26:21.723 Got JSON-RPC error response 00:26:21.723 response: 00:26:21.723 { 00:26:21.723 "code": -32602, 00:26:21.723 "message": "Invalid parameters" 00:26:21.723 } 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:21.723 rmmod nvme_tcp 00:26:21.723 rmmod nvme_fabrics 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 3605738 ']' 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 3605738 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 3605738 ']' 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 3605738 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3605738 
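[editorial note] The three failed attach attempts traced here are deliberate negative tests: the NOT helper from autotest_common.sh (the es=/valid_exec_arg lines) succeeds only when the wrapped command fails, and the target was re-keyed at host/auth.sh@123 to sha256/ffdhe2048 with key 1, so connecting with no key, with the wrong key (key2), or with the right key but the wrong controller key (key1/ckey2) must each be rejected with JSON-RPC error -32602 "Invalid parameters" and leave no controller behind. A minimal sketch of that pattern, taken from the trace:

    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key1 --dhchap-ctrlr-key ckey2
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))   # as at auth.sh@127/@133

The cleanup that follows (trap - SIGINT SIGTERM EXIT; cleanup; nvmftestfini) unloads nvme-tcp/nvme-fabrics, kills the nvmf_tgt process (pid 3605738) and removes the kernel target's configfs entries.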
00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3605738' 00:26:21.723 killing process with pid 3605738 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 3605738 00:26:21.723 01:03:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 3605738 00:26:22.292 01:03:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:22.292 01:03:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:22.292 01:03:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:22.292 01:03:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.292 01:03:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:22.292 01:03:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.292 01:03:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.292 01:03:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:24.196 01:03:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:26:27.529 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:26:27.529 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:26:27.529 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:26:27.529 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:26:27.529 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:26:27.529 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:26:27.529 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:26:27.529 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:26:27.529 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:26:27.529 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:26:27.529 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:26:27.529 0000:6a:01.0 (8086 0b25): idxd -> 
vfio-pci 00:26:27.529 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:26:27.529 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:26:27.529 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:26:27.530 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:26:28.097 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:26:28.097 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:26:28.356 01:03:15 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.H3u /tmp/spdk.key-null.v12 /tmp/spdk.key-sha256.Zkx /tmp/spdk.key-sha384.Idh /tmp/spdk.key-sha512.a49 /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log 00:26:28.356 01:03:15 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:26:30.888 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:30.888 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:26:30.888 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:30.888 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:30.888 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:30.888 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:30.888 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:30.888 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:30.888 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:30.888 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:30.888 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:30.888 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:30.888 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:30.888 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:30.888 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:30.888 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:30.888 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:30.888 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:26:31.147 00:26:31.147 real 0m47.393s 00:26:31.147 user 0m39.883s 00:26:31.147 sys 0m11.958s 00:26:31.147 01:03:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:31.147 01:03:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:26:31.147 ************************************ 00:26:31.147 END TEST nvmf_auth 00:26:31.147 ************************************ 00:26:31.147 01:03:18 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:26:31.147 01:03:18 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:31.147 01:03:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:31.147 01:03:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:31.147 01:03:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.147 ************************************ 00:26:31.147 START TEST nvmf_digest 00:26:31.147 ************************************ 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:31.147 * Looking for test storage... 
00:26:31.147 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.147 01:03:18 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:31.148 01:03:18 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:31.148 01:03:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- 
# [[ '' == e810 ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:37.720 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:37.720 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:37.720 Found net devices under 0000:27:00.0: cvl_0_0 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:37.720 Found net devices under 0000:27:00.1: cvl_0_1 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:37.720 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:37.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:26:37.721 00:26:37.721 --- 10.0.0.2 ping statistics --- 00:26:37.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.721 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:26:37.721 00:26:37.721 --- 10.0.0.1 ping statistics --- 00:26:37.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.721 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 1 -eq 1 ]] 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- host/digest.sh@142 -- # run_test nvmf_digest_dsa_initiator run_digest dsa_initiator 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:37.721 ************************************ 00:26:37.721 START TEST nvmf_digest_dsa_initiator 00:26:37.721 ************************************ 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@1121 -- # run_digest dsa_initiator 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@120 -- # local dsa_initiator 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@121 -- # [[ dsa_initiator == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@121 -- # dsa_initiator=true 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@481 -- # nvmfpid=3621019 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@482 -- # waitforlisten 3621019 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@827 -- # '[' -z 3621019 ']' 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:37.721 01:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:37.721 [2024-05-15 01:03:23.885581] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:26:37.721 [2024-05-15 01:03:23.885648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.721 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.721 [2024-05-15 01:03:23.974355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.721 [2024-05-15 01:03:24.066082] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.721 [2024-05-15 01:03:24.066117] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.721 [2024-05-15 01:03:24.066126] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.721 [2024-05-15 01:03:24.066136] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.721 [2024-05-15 01:03:24.066143] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
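[editor's reference] The interface plumbing that nvmf_tcp_init ran above amounts to the sketch below; the cvl_0_0/cvl_0_1 device names and the 10.0.0.0/24 addresses are simply what this particular rig detected and chose, not fixed requirements.
  # move one port of the NIC into a private namespace and give each side an address
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side lives inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP (port 4420) on the initiator-facing interface
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp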
00:26:37.721 [2024-05-15 01:03:24.066169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@860 -- # return 0 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@125 -- # [[ dsa_initiator == \d\s\a\_\t\a\r\g\e\t ]] 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@126 -- # common_target_config 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@43 -- # rpc_cmd 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.721 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:37.981 null0 00:26:37.981 [2024-05-15 01:03:24.809033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.981 [2024-05-15 01:03:24.832954] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:37.981 [2024-05-15 01:03:24.833270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@128 -- # run_bperf randread 4096 128 true 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randread 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=4096 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=128 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=3621250 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 3621250 /var/tmp/bperf.sock 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@827 -- # '[' -z 3621250 ']' 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:37.981 01:03:24 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:37.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:37.981 01:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:37.981 [2024-05-15 01:03:24.911412] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:26:37.981 [2024-05-15 01:03:24.911533] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3621250 ] 00:26:37.981 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.241 [2024-05-15 01:03:25.050933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.241 [2024-05-15 01:03:25.191070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.808 01:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:38.808 01:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@860 -- # return 0 00:26:38.808 01:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:26:38.808 01:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:26:38.808 01:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:26:38.808 [2024-05-15 01:03:25.735841] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:38.808 01:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:38.808 01:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:44.147 01:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.147 01:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.405 nvme0n1 00:26:44.405 01:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:44.405 01:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.662 Running I/O for 2 seconds... 
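[editor's reference] Stripped of the harness wrappers (bperf_rpc, bperf_py), the initiator-side DSA run set up above is just this RPC sequence against the bdevperf application; paths are shown relative to the SPDK tree, --ddgst is the switch that enables NVMe/TCP data digests, and dsa_scan_accel_module is what routes the resulting crc32c work to DSA.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module      # claim DSA for the accel framework
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests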
00:26:46.570 00:26:46.570 Latency(us) 00:26:46.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.570 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:46.570 nvme0n1 : 2.00 19383.62 75.72 0.00 0.00 6595.07 2845.64 15659.65 00:26:46.570 =================================================================================================================== 00:26:46.570 Total : 19383.62 75.72 0.00 0.00 6595.07 2845.64 15659.65 00:26:46.570 0 00:26:46.570 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:46.570 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:26:46.570 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:46.570 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:46.570 | select(.opcode=="crc32c") 00:26:46.570 | "\(.module_name) \(.executed)"' 00:26:46.570 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 3621250 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@946 -- # '[' -z 3621250 ']' 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@950 -- # kill -0 3621250 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # uname 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3621250 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3621250' 00:26:46.829 killing process with pid 3621250 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # kill 3621250 00:26:46.829 Received shutdown signal, test time was about 2.000000 seconds 00:26:46.829 00:26:46.829 Latency(us) 00:26:46.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.829 =================================================================================================================== 00:26:46.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.829 01:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@970 -- # wait 3621250 00:26:48.211 01:03:35 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@129 -- # run_bperf randread 131072 16 true 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randread 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=131072 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=16 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=3623114 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 3623114 /var/tmp/bperf.sock 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@827 -- # '[' -z 3623114 ']' 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:48.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:48.211 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:48.211 [2024-05-15 01:03:35.166359] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:26:48.211 [2024-05-15 01:03:35.166483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3623114 ] 00:26:48.211 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:48.211 Zero copy mechanism will not be used. 
00:26:48.211 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.470 [2024-05-15 01:03:35.282100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.470 [2024-05-15 01:03:35.371997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.036 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:49.036 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@860 -- # return 0 00:26:49.036 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:26:49.036 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:26:49.036 01:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:26:49.036 [2024-05-15 01:03:35.996546] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:49.036 01:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:49.036 01:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:54.305 01:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.305 01:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.563 nvme0n1 00:26:54.563 01:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:54.563 01:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:54.563 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:54.563 Zero copy mechanism will not be used. 00:26:54.563 Running I/O for 2 seconds... 
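[editor's reference] The pass/fail decision after each run comes from the accel framework's own counters rather than from the I/O numbers. A standalone version of the check digest.sh performs (the jq filter is exactly the one shown in the trace) looks like this:
  read -r acc_module acc_executed < <(
      scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))       # some digests were actually computed...
  [[ $acc_module == dsa ]]     # ...and DSA did the work (the dsa_target leg later expects "software" here instead)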
00:26:57.088 00:26:57.088 Latency(us) 00:26:57.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.088 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:57.088 nvme0n1 : 2.00 6927.18 865.90 0.00 0.00 2306.88 461.34 4449.55 00:26:57.089 =================================================================================================================== 00:26:57.089 Total : 6927.18 865.90 0.00 0.00 2306.88 461.34 4449.55 00:26:57.089 0 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:57.089 | select(.opcode=="crc32c") 00:26:57.089 | "\(.module_name) \(.executed)"' 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 3623114 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@946 -- # '[' -z 3623114 ']' 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@950 -- # kill -0 3623114 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # uname 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3623114 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3623114' 00:26:57.089 killing process with pid 3623114 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # kill 3623114 00:26:57.089 Received shutdown signal, test time was about 2.000000 seconds 00:26:57.089 00:26:57.089 Latency(us) 00:26:57.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.089 =================================================================================================================== 00:26:57.089 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.089 01:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@970 -- # wait 3623114 00:26:58.467 01:03:45 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 true 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randwrite 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=4096 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=128 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=3625152 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 3625152 /var/tmp/bperf.sock 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@827 -- # '[' -z 3625152 ']' 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:58.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:58.467 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:26:58.467 [2024-05-15 01:03:45.237285] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:26:58.467 [2024-05-15 01:03:45.237430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3625152 ] 00:26:58.467 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.467 [2024-05-15 01:03:45.367774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.467 [2024-05-15 01:03:45.459055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.036 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:59.036 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@860 -- # return 0 00:26:59.036 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:26:59.036 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:26:59.036 01:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:26:59.036 [2024-05-15 01:03:46.075642] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:59.036 01:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:59.036 01:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:04.308 01:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.308 01:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.567 nvme0n1 00:27:04.567 01:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:04.567 01:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.827 Running I/O for 2 seconds... 
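[editor's reference] The killprocess calls that close out each run above are a small autotest_common helper; reduced to its essentials (and omitting the harness's sudo handling and extra logging) it does roughly the following:
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                                       # is the process still running?
      echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
      kill "$pid" && wait "$pid"                                       # SIGTERM, then reap
  }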
00:27:06.733 00:27:06.733 Latency(us) 00:27:06.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.733 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:06.733 nvme0n1 : 2.00 26859.60 104.92 0.00 0.00 4756.84 2017.82 7174.47 00:27:06.733 =================================================================================================================== 00:27:06.733 Total : 26859.60 104.92 0.00 0.00 4756.84 2017.82 7174.47 00:27:06.733 0 00:27:06.733 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:06.733 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:27:06.733 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:06.733 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:06.733 | select(.opcode=="crc32c") 00:27:06.733 | "\(.module_name) \(.executed)"' 00:27:06.733 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 3625152 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@946 -- # '[' -z 3625152 ']' 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@950 -- # kill -0 3625152 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # uname 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3625152 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3625152' 00:27:06.992 killing process with pid 3625152 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # kill 3625152 00:27:06.992 Received shutdown signal, test time was about 2.000000 seconds 00:27:06.992 00:27:06.992 Latency(us) 00:27:06.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.992 =================================================================================================================== 00:27:06.992 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.992 01:03:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@970 -- # wait 3625152 00:27:08.416 01:03:55 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 true 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randwrite 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=131072 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=16 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=3626976 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 3626976 /var/tmp/bperf.sock 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@827 -- # '[' -z 3626976 ']' 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:08.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:27:08.417 01:03:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:08.417 [2024-05-15 01:03:55.360746] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:08.417 [2024-05-15 01:03:55.360863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3626976 ] 00:27:08.417 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:08.417 Zero copy mechanism will not be used. 
00:27:08.417 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.417 [2024-05-15 01:03:55.473770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.676 [2024-05-15 01:03:55.564296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.246 01:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:09.246 01:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@860 -- # return 0 00:27:09.246 01:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:27:09.246 01:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:27:09.246 01:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:27:09.246 [2024-05-15 01:03:56.216831] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:27:09.246 01:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:09.246 01:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:14.523 01:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:14.523 01:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:14.784 nvme0n1 00:27:14.784 01:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:14.784 01:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:14.784 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:14.784 Zero copy mechanism will not be used. 00:27:14.784 Running I/O for 2 seconds... 
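[editor's reference] By this point all four I/O patterns of the initiator leg have been launched (host/digest.sh@128-131). The script issues four explicit run_bperf calls rather than looping, but the matrix it covers, and the bdevperf invocation behind each entry, can be summarized as:
  for cfg in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
      set -- $cfg   # rw, block size, queue depth
      build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w "$1" -o "$2" -t 2 -q "$3" -z --wait-for-rpc &
      # ...followed by the same RPC sequence and crc32c check sketched earlier...
  done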
00:27:17.319 00:27:17.319 Latency(us) 00:27:17.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.319 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:17.319 nvme0n1 : 2.00 6607.18 825.90 0.00 0.00 2417.33 1172.75 6070.70 00:27:17.319 =================================================================================================================== 00:27:17.319 Total : 6607.18 825.90 0.00 0.00 2417.33 1172.75 6070.70 00:27:17.319 0 00:27:17.319 01:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:17.319 01:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:27:17.319 01:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:17.319 01:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:17.319 | select(.opcode=="crc32c") 00:27:17.319 | "\(.module_name) \(.executed)"' 00:27:17.319 01:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:17.319 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:27:17.319 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:27:17.319 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:17.319 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:27:17.319 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 3626976 00:27:17.319 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@946 -- # '[' -z 3626976 ']' 00:27:17.319 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@950 -- # kill -0 3626976 00:27:17.319 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # uname 00:27:17.319 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:17.320 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3626976 00:27:17.320 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:17.320 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:17.320 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3626976' 00:27:17.320 killing process with pid 3626976 00:27:17.320 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # kill 3626976 00:27:17.320 Received shutdown signal, test time was about 2.000000 seconds 00:27:17.320 00:27:17.320 Latency(us) 00:27:17.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.320 =================================================================================================================== 00:27:17.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:17.320 01:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@970 -- # wait 3626976 00:27:18.696 01:04:05 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@132 -- # killprocess 3621019 00:27:18.696 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@946 -- # '[' -z 3621019 ']' 00:27:18.696 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@950 -- # kill -0 3621019 00:27:18.696 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # uname 00:27:18.696 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:18.696 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3621019 00:27:18.696 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:18.696 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:18.696 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3621019' 00:27:18.696 killing process with pid 3621019 00:27:18.697 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # kill 3621019 00:27:18.697 [2024-05-15 01:04:05.529885] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:18.697 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@970 -- # wait 3621019 00:27:18.955 00:27:18.955 real 0m42.158s 00:27:18.955 user 1m2.648s 00:27:18.955 sys 0m3.856s 00:27:18.955 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:18.955 01:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:27:18.955 ************************************ 00:27:18.955 END TEST nvmf_digest_dsa_initiator 00:27:18.955 ************************************ 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest -- host/digest.sh@143 -- # run_test nvmf_digest_dsa_target run_digest dsa_target 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.214 ************************************ 00:27:19.214 START TEST nvmf_digest_dsa_target 00:27:19.214 ************************************ 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@1121 -- # run_digest dsa_target 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@120 -- # local dsa_initiator 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@121 -- # [[ dsa_target == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@121 -- # dsa_initiator=false 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:19.214 
01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@481 -- # nvmfpid=3629072 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@482 -- # waitforlisten 3629072 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@827 -- # '[' -z 3629072 ']' 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.214 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:19.215 [2024-05-15 01:04:06.145819] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:19.215 [2024-05-15 01:04:06.145926] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.215 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.215 [2024-05-15 01:04:06.268622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.472 [2024-05-15 01:04:06.360477] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.472 [2024-05-15 01:04:06.360518] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.472 [2024-05-15 01:04:06.360533] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.472 [2024-05-15 01:04:06.360542] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.472 [2024-05-15 01:04:06.360550] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:19.472 [2024-05-15 01:04:06.360576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@860 -- # return 0 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@125 -- # [[ dsa_target == \d\s\a\_\t\a\r\g\e\t ]] 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@125 -- # rpc_cmd dsa_scan_accel_module 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.039 [2024-05-15 01:04:06.861062] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@126 -- # common_target_config 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@43 -- # rpc_cmd 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.039 01:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:25.319 null0 00:27:25.319 [2024-05-15 01:04:11.980701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.319 [2024-05-15 01:04:12.007351] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:25.319 [2024-05-15 01:04:12.007672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randread 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=4096 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=128 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=3630262 00:27:25.319 01:04:12 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 3630262 /var/tmp/bperf.sock 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@827 -- # '[' -z 3630262 ']' 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:25.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:25.319 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:25.319 [2024-05-15 01:04:12.062122] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:25.320 [2024-05-15 01:04:12.062198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630262 ] 00:27:25.320 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.320 [2024-05-15 01:04:12.150671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.320 [2024-05-15 01:04:12.240922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.887 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:25.887 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@860 -- # return 0 00:27:25.887 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:27:25.887 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:25.887 01:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:26.145 01:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.145 01:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.403 nvme0n1 00:27:26.403 01:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:26.403 01:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:26.403 Running I/O for 2 seconds... 
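[editor's reference] The dsa_target leg running here flips where the offload lives: dsa_scan_accel_module is issued against the nvmf target's default RPC socket (/var/tmp/spdk.sock) before the rest of its configuration is loaded, while bdevperf is brought up without a DSA scan, so the crc32c stats collected from bdevperf are now expected to name the software module. Roughly, with paths shortened and values taken from this run:
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  scripts/rpc.py dsa_scan_accel_module          # DSA is now claimed by the target, not the initiator
  # ...framework_start_init plus the usual transport/subsystem/listener setup on 10.0.0.2:4420 follow via rpc_cmd...
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init           # note: no dsa_scan_accel_module on this side
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests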
00:27:28.309 00:27:28.309 Latency(us) 00:27:28.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.309 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:28.309 nvme0n1 : 2.00 22912.09 89.50 0.00 0.00 5581.08 2293.76 16073.57 00:27:28.309 =================================================================================================================== 00:27:28.309 Total : 22912.09 89.50 0.00 0.00 5581.08 2293.76 16073.57 00:27:28.309 0 00:27:28.309 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:28.309 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:27:28.309 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:28.309 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:28.309 | select(.opcode=="crc32c") 00:27:28.309 | "\(.module_name) \(.executed)"' 00:27:28.309 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:28.567 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:27:28.567 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:27:28.567 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:28.567 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:28.567 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 3630262 00:27:28.567 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@946 -- # '[' -z 3630262 ']' 00:27:28.567 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@950 -- # kill -0 3630262 00:27:28.567 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # uname 00:27:28.567 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:28.568 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3630262 00:27:28.568 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:28.568 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:28.568 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3630262' 00:27:28.568 killing process with pid 3630262 00:27:28.568 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # kill 3630262 00:27:28.568 Received shutdown signal, test time was about 2.000000 seconds 00:27:28.568 00:27:28.568 Latency(us) 00:27:28.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.568 =================================================================================================================== 00:27:28.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:28.568 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@970 -- # wait 3630262 00:27:29.135 01:04:15 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randread 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=131072 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=16 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=3630959 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 3630959 /var/tmp/bperf.sock 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@827 -- # '[' -z 3630959 ']' 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:29.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.135 01:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:29.135 [2024-05-15 01:04:15.979984] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:29.135 [2024-05-15 01:04:15.980152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630959 ] 00:27:29.135 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:29.135 Zero copy mechanism will not be used. 
00:27:29.135 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.135 [2024-05-15 01:04:16.113599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.395 [2024-05-15 01:04:16.208067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.653 01:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.653 01:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@860 -- # return 0 00:27:29.653 01:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:27:29.653 01:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:29.653 01:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:29.911 01:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.911 01:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.477 nvme0n1 00:27:30.477 01:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:30.477 01:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:30.477 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:30.477 Zero copy mechanism will not be used. 00:27:30.477 Running I/O for 2 seconds... 
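After each two-second run, the same pass/fail check is applied: the accel framework statistics are read back over the bperf socket and the test asserts that crc32c work was actually executed, and by the expected module (software in all four cases here, since scan_dsa=false). A minimal sketch of that check, using the jq filter that appears in the trace:

    read -r acc_module acc_executed < <(
        $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    (( acc_executed > 0 ))            # some crc32c operations must have been accounted
    [[ $acc_module == software ]]     # and they must have run on the expected module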
00:27:32.384 00:27:32.384 Latency(us) 00:27:32.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.384 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:32.384 nvme0n1 : 2.00 7732.58 966.57 0.00 0.00 2066.39 396.67 4139.12 00:27:32.384 =================================================================================================================== 00:27:32.384 Total : 7732.58 966.57 0.00 0.00 2066.39 396.67 4139.12 00:27:32.384 0 00:27:32.384 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:32.384 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:27:32.384 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:32.384 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:32.384 | select(.opcode=="crc32c") 00:27:32.384 | "\(.module_name) \(.executed)"' 00:27:32.384 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 3630959 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@946 -- # '[' -z 3630959 ']' 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@950 -- # kill -0 3630959 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # uname 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3630959 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3630959' 00:27:32.644 killing process with pid 3630959 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # kill 3630959 00:27:32.644 Received shutdown signal, test time was about 2.000000 seconds 00:27:32.644 00:27:32.644 Latency(us) 00:27:32.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.644 =================================================================================================================== 00:27:32.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:32.644 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@970 -- # wait 3630959 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target 
-- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randwrite 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=4096 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=128 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=3631765 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 3631765 /var/tmp/bperf.sock 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@827 -- # '[' -z 3631765 ']' 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:32.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.957 01:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:32.957 [2024-05-15 01:04:19.933680] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:27:32.957 [2024-05-15 01:04:19.933779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631765 ] 00:27:33.235 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.235 [2024-05-15 01:04:20.026921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.235 [2024-05-15 01:04:20.129753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.803 01:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:33.803 01:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@860 -- # return 0 00:27:33.803 01:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:27:33.803 01:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:33.803 01:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:34.061 01:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.061 01:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.319 nvme0n1 00:27:34.319 01:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:34.319 01:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:34.319 Running I/O for 2 seconds... 
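The DSA-target phase of this suite repeats the same run_bperf helper over a small workload matrix; the xtrace only differs in the positional arguments it records into rw, bs, qd and scan_dsa before launching bdevperf. The cases covered in this log are:

    # run_bperf <rw> <bs> <qd> <scan_dsa>  -- scan_dsa=false throughout, so the software crc32c module is expected
    run_bperf randread  4096   128 false
    run_bperf randread  131072 16  false    # digest.sh@129
    run_bperf randwrite 4096   128 false    # digest.sh@130
    run_bperf randwrite 131072 16  false    # digest.sh@131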
00:27:36.234 00:27:36.234 Latency(us) 00:27:36.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.234 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:36.234 nvme0n1 : 2.00 25547.36 99.79 0.00 0.00 5001.19 2043.69 8347.22 00:27:36.234 =================================================================================================================== 00:27:36.234 Total : 25547.36 99.79 0.00 0.00 5001.19 2043.69 8347.22 00:27:36.234 0 00:27:36.234 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:36.234 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:27:36.234 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:36.234 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:36.234 | select(.opcode=="crc32c") 00:27:36.234 | "\(.module_name) \(.executed)"' 00:27:36.234 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 3631765 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@946 -- # '[' -z 3631765 ']' 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@950 -- # kill -0 3631765 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # uname 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3631765 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3631765' 00:27:36.493 killing process with pid 3631765 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # kill 3631765 00:27:36.493 Received shutdown signal, test time was about 2.000000 seconds 00:27:36.493 00:27:36.493 Latency(us) 00:27:36.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.493 =================================================================================================================== 00:27:36.493 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.493 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@970 -- # wait 3631765 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target 
-- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randwrite 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=131072 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=16 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=3632526 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 3632526 /var/tmp/bperf.sock 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@827 -- # '[' -z 3632526 ']' 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:36.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:36.752 01:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:37.012 [2024-05-15 01:04:23.884190] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:37.012 [2024-05-15 01:04:23.884337] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632526 ] 00:27:37.012 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:37.012 Zero copy mechanism will not be used. 
00:27:37.012 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.012 [2024-05-15 01:04:24.015828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.272 [2024-05-15 01:04:24.107268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.840 01:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:37.840 01:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@860 -- # return 0 00:27:37.840 01:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:27:37.840 01:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:37.840 01:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:37.840 01:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.840 01:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:38.098 nvme0n1 00:27:38.098 01:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:38.098 01:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:38.356 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:38.356 Zero copy mechanism will not be used. 00:27:38.356 Running I/O for 2 seconds... 
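Each bdevperf instance is torn down through the killprocess helper whose xtrace repeats after every run: it checks that the PID is still alive, resolves the command name (reactor_1 for these bdevperf processes) so that a sudo wrapper would be handled specially, then kills and reaps the process. A rough bash equivalent of what the trace shows (the real helper in autotest_common.sh has an extra branch for sudo-wrapped processes that never triggers here):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                              # fail if the process already exited
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid") # reactor_1 in this log
            [[ $name != sudo ]]                     # assumption: plain kill is only taken for non-sudo processes
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap and propagate the exit status
    }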
00:27:40.264 00:27:40.264 Latency(us) 00:27:40.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.264 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:40.264 nvme0n1 : 2.00 7745.18 968.15 0.00 0.00 2061.70 1319.34 9175.04 00:27:40.264 =================================================================================================================== 00:27:40.264 Total : 7745.18 968.15 0.00 0.00 2061.70 1319.34 9175.04 00:27:40.264 0 00:27:40.264 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:40.264 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:27:40.264 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:40.264 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:40.264 | select(.opcode=="crc32c") 00:27:40.264 | "\(.module_name) \(.executed)"' 00:27:40.264 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 3632526 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@946 -- # '[' -z 3632526 ']' 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@950 -- # kill -0 3632526 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # uname 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3632526 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3632526' 00:27:40.524 killing process with pid 3632526 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # kill 3632526 00:27:40.524 Received shutdown signal, test time was about 2.000000 seconds 00:27:40.524 00:27:40.524 Latency(us) 00:27:40.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.524 =================================================================================================================== 00:27:40.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:40.524 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@970 -- # wait 3632526 00:27:40.785 01:04:27 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@132 -- # killprocess 3629072 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@946 -- # '[' -z 3629072 ']' 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@950 -- # kill -0 3629072 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # uname 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3629072 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3629072' 00:27:40.785 killing process with pid 3629072 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # kill 3629072 00:27:40.785 01:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@970 -- # wait 3629072 00:27:40.785 [2024-05-15 01:04:27.770561] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:42.690 00:27:42.690 real 0m23.225s 00:27:42.690 user 0m33.639s 00:27:42.690 sys 0m3.643s 00:27:42.690 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:42.690 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:27:42.691 ************************************ 00:27:42.691 END TEST nvmf_digest_dsa_target 00:27:42.691 ************************************ 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:42.691 ************************************ 00:27:42.691 START TEST nvmf_digest_error 00:27:42.691 ************************************ 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3633589 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3633589 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3633589 ']' 00:27:42.691 01:04:29 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.691 01:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:42.691 [2024-05-15 01:04:29.408473] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:42.691 [2024-05-15 01:04:29.408548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.691 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.691 [2024-05-15 01:04:29.497865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.691 [2024-05-15 01:04:29.589609] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.691 [2024-05-15 01:04:29.589645] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.691 [2024-05-15 01:04:29.589655] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.691 [2024-05-15 01:04:29.589664] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.691 [2024-05-15 01:04:29.589672] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
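From here the suite switches to nvmf_digest_error, which deliberately breaks the crc32c path: the rpc_cmd calls that follow assign the crc32c opcode to the error accel module on the target, then arm corruption of 256 operations once the controller is attached. A minimal sketch of that sequence (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    # route all crc32c work on the target through the error-injection accel module
    rpc_cmd accel_assign_opc -o crc32c -m error

    # start with injection disabled, then corrupt 256 crc32c operations,
    # which produces the 'data digest error' completions seen further down
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256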
00:27:42.691 [2024-05-15 01:04:29.589704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:43.263 [2024-05-15 01:04:30.158193] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.263 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:43.263 null0 00:27:43.263 [2024-05-15 01:04:30.317249] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.523 [2024-05-15 01:04:30.341168] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:43.523 [2024-05-15 01:04:30.341471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3633894 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3633894 /var/tmp/bperf.sock 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3633894 ']' 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:43.523 01:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:43.523 [2024-05-15 01:04:30.395855] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:43.523 [2024-05-15 01:04:30.395930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633894 ] 00:27:43.523 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.523 [2024-05-15 01:04:30.486482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.523 [2024-05-15 01:04:30.577915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.089 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:44.089 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:44.089 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:44.089 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:44.348 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:44.348 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.348 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.348 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.348 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.348 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.608 nvme0n1 00:27:44.608 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:44.608 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.608 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.608 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.608 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:27:44.608 01:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:44.608 Running I/O for 2 seconds... 00:27:44.608 [2024-05-15 01:04:31.521057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.608 [2024-05-15 01:04:31.521106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.608 [2024-05-15 01:04:31.521127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.608 [2024-05-15 01:04:31.532589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.608 [2024-05-15 01:04:31.532625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.608 [2024-05-15 01:04:31.532637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.608 [2024-05-15 01:04:31.541541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.608 [2024-05-15 01:04:31.541573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.608 [2024-05-15 01:04:31.541585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.608 [2024-05-15 01:04:31.553116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.608 [2024-05-15 01:04:31.553146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.608 [2024-05-15 01:04:31.553156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.608 [2024-05-15 01:04:31.564550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.608 [2024-05-15 01:04:31.564577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.608 [2024-05-15 01:04:31.564587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.608 [2024-05-15 01:04:31.572764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.608 [2024-05-15 01:04:31.572792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.608 [2024-05-15 01:04:31.572803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.608 [2024-05-15 01:04:31.582630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:44.608 [2024-05-15 01:04:31.582656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.608 [2024-05-15 01:04:31.582666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.609 [2024-05-15 01:04:31.591278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.609 [2024-05-15 01:04:31.591306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.609 [2024-05-15 01:04:31.591315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.609 [2024-05-15 01:04:31.603147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.609 [2024-05-15 01:04:31.603181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.609 [2024-05-15 01:04:31.603193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.609 [2024-05-15 01:04:31.613839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.609 [2024-05-15 01:04:31.613868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.609 [2024-05-15 01:04:31.613878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.609 [2024-05-15 01:04:31.622158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.609 [2024-05-15 01:04:31.622184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.609 [2024-05-15 01:04:31.622194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.609 [2024-05-15 01:04:31.635476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.609 [2024-05-15 01:04:31.635504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.609 [2024-05-15 01:04:31.635514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.609 [2024-05-15 01:04:31.644004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.609 [2024-05-15 01:04:31.644032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.609 [2024-05-15 01:04:31.644042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.609 [2024-05-15 01:04:31.655813] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.609 [2024-05-15 01:04:31.655849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.609 [2024-05-15 01:04:31.655859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.609 [2024-05-15 01:04:31.664891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.609 [2024-05-15 01:04:31.664921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.609 [2024-05-15 01:04:31.664932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.877 [2024-05-15 01:04:31.675117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.877 [2024-05-15 01:04:31.675144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.877 [2024-05-15 01:04:31.675154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.877 [2024-05-15 01:04:31.683845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.877 [2024-05-15 01:04:31.683871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.877 [2024-05-15 01:04:31.683881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.877 [2024-05-15 01:04:31.693448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.877 [2024-05-15 01:04:31.693474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.877 [2024-05-15 01:04:31.693489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.877 [2024-05-15 01:04:31.703062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.877 [2024-05-15 01:04:31.703088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.877 [2024-05-15 01:04:31.703097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.877 [2024-05-15 01:04:31.711794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.877 [2024-05-15 01:04:31.711820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.877 [2024-05-15 01:04:31.711830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.877 [2024-05-15 01:04:31.724712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.724738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.724747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.732951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.732976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.732986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.744936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.744962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.744971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.757958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.757986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.757996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.766088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.766114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.766123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.777577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.777603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.777613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.788925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.788958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.788969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.797994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.798022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.798032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.810318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.810346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.810355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.821281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.821318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.821331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.830494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.830519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.830529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.839482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.839506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.839516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.850607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.850633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.850643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.859691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.859724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18902 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.859735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.871509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.871536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.871551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.879149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.878 [2024-05-15 01:04:31.879173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.878 [2024-05-15 01:04:31.879183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.878 [2024-05-15 01:04:31.890815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.879 [2024-05-15 01:04:31.890843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.879 [2024-05-15 01:04:31.890853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.879 [2024-05-15 01:04:31.899132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.879 [2024-05-15 01:04:31.899159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.879 [2024-05-15 01:04:31.899169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.879 [2024-05-15 01:04:31.910808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.879 [2024-05-15 01:04:31.910835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.879 [2024-05-15 01:04:31.910846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.879 [2024-05-15 01:04:31.922933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.879 [2024-05-15 01:04:31.922960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.879 [2024-05-15 01:04:31.922970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.879 [2024-05-15 01:04:31.930786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:44.879 [2024-05-15 01:04:31.930811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.879 [2024-05-15 01:04:31.930821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:31.942633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:31.942662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:31.942672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:31.953674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:31.953699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:31.953709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:31.962685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:31.962711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:31.962721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:31.974381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:31.974407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:31.974417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:31.982694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:31.982720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:31.982730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:31.994862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:31.994891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:31.994901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:32.007505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.007532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:32.007542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:32.018330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.018356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:32.018366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:32.027006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.027030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:32.027040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:32.038826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.038852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:32.038862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:32.049317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.049343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:32.049357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:32.057485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.057511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:32.057522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:32.070173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.070201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:32.070211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 
01:04:32.081234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.081259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:32.081269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:32.089722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.089748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.141 [2024-05-15 01:04:32.089758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.141 [2024-05-15 01:04:32.100314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.141 [2024-05-15 01:04:32.100340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.100350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.110216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.110241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.110251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.118685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.118710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.118721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.129618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.129643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.129653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.140240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.140265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.140275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.150024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.150051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.150060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.159785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.159810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.159819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.169606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.169638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.169650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.179390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.179418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.179428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.187534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.187560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.187570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.142 [2024-05-15 01:04:32.198706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.142 [2024-05-15 01:04:32.198731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.142 [2024-05-15 01:04:32.198741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.209954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.209979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.209989] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.218400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.218427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.218443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.229552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.229578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.229587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.238730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.238755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.238765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.250029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.250058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.250068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.259344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.259368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.259378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.269799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.269824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.269834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.279001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.279025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21875 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.279035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.287469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.287495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.287506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.297676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.297703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.297713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.307532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.307557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.307567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.316600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.316625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.316635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.325881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.325908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.325919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.335152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.335177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.335186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.344407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.344433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.344443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.353672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.353697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.353706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.362935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.362960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.362970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.372836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.372868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.372879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.382109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.382135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.382150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.391401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.391429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.391440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.400683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.400709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.400718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.409967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150003a3c00) 00:27:45.404 [2024-05-15 01:04:32.409991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.404 [2024-05-15 01:04:32.410001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.404 [2024-05-15 01:04:32.419238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.405 [2024-05-15 01:04:32.419263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.405 [2024-05-15 01:04:32.419272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.405 [2024-05-15 01:04:32.429147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.405 [2024-05-15 01:04:32.429173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.405 [2024-05-15 01:04:32.429183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.405 [2024-05-15 01:04:32.438422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.405 [2024-05-15 01:04:32.438448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.405 [2024-05-15 01:04:32.438457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.405 [2024-05-15 01:04:32.447680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.405 [2024-05-15 01:04:32.447704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.405 [2024-05-15 01:04:32.447714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.405 [2024-05-15 01:04:32.456942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.405 [2024-05-15 01:04:32.456966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.405 [2024-05-15 01:04:32.456975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.405 [2024-05-15 01:04:32.466238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.405 [2024-05-15 01:04:32.466267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.405 [2024-05-15 01:04:32.466277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.664 [2024-05-15 01:04:32.475524] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.664 [2024-05-15 01:04:32.475552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.664 [2024-05-15 01:04:32.475561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.664 [2024-05-15 01:04:32.484776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.664 [2024-05-15 01:04:32.484800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.664 [2024-05-15 01:04:32.484810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.664 [2024-05-15 01:04:32.494036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.664 [2024-05-15 01:04:32.494068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.664 [2024-05-15 01:04:32.494078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.664 [2024-05-15 01:04:32.503314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.664 [2024-05-15 01:04:32.503340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.664 [2024-05-15 01:04:32.503358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.664 [2024-05-15 01:04:32.512572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.664 [2024-05-15 01:04:32.512599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.664 [2024-05-15 01:04:32.512609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.664 [2024-05-15 01:04:32.521827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.664 [2024-05-15 01:04:32.521852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.664 [2024-05-15 01:04:32.521862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.531108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.531133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.531143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.540373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.540398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.540412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.549638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.549663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.549673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.558919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.558944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.558954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.568191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.568215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.568225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.578422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.578447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.578456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.587598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.587626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.587639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.597430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.597458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.597468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.606832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.606860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.606871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.615279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.615305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.615315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.627239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.627274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.627284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.636560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.636587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.636597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.647917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.647942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.647953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.657029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.657057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.657067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.669486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.669515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19595 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.669527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.680852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.680879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.680890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.691987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.692015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.692025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.699977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.700001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.700011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.711312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.711340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.711355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.665 [2024-05-15 01:04:32.719301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.665 [2024-05-15 01:04:32.719329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.665 [2024-05-15 01:04:32.719339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.731200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.731226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.731237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.739212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.739235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.739245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.750624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.750648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.750657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.759467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.759495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.759505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.768173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.768198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.768208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.777585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.777609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.777619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.787312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.787337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.787346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.796489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.796520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.796530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.805738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.805762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.805773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.815255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.815280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.815290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.824511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.824535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.824544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.833764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.833789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.833799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.842999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.843026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.843037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.852214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.852238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.852248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.861397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.861422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.861431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.871328] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.871352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.871366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.881084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.881108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.881119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.890952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.890988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.891007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.901532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.901559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.901569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.911686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.911711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.911721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.921053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.921090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.921102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.932315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.932341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.932351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.944591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.944616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.944626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.955748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.924 [2024-05-15 01:04:32.955773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.924 [2024-05-15 01:04:32.955782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.924 [2024-05-15 01:04:32.964388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.925 [2024-05-15 01:04:32.964419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.925 [2024-05-15 01:04:32.964429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.925 [2024-05-15 01:04:32.977394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:45.925 [2024-05-15 01:04:32.977420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.925 [2024-05-15 01:04:32.977429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:32.989386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:32.989415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:32.989426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:32.998974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:32.999000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:32.999011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.007631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.007656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.007666] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.019666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.019691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.019700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.032407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.032431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.032441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.044597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.044623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.044633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.056250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.056278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.056289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.067223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.067253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.067267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.077865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.077891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.077901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.086503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.086527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19399 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.086537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.096614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.096641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.096651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.107863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.107891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.107901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.184 [2024-05-15 01:04:33.117539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.184 [2024-05-15 01:04:33.117564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.184 [2024-05-15 01:04:33.117575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.128821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.128846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.128855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.138816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.138841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.138851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.148916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.148950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.148962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.162793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.162821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.162832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.172749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.172775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.172784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.181396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.181421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.181431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.192594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.192624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.192635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.204437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.204465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.204475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.213492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.213518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.213527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.225324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.225356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.225367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.233688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.233714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.233725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.185 [2024-05-15 01:04:33.245062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.185 [2024-05-15 01:04:33.245089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.185 [2024-05-15 01:04:33.245099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.257593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.257621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.257632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.269064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.269092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.269101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.279059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.279089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.279101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.290928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.290954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.290964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.301318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.301345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.301354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 
01:04:33.309760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.309785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.309796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.320882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.320908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.320918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.330241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.330272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.330282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.341216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.341240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.341250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.352197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.352220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.352230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.360730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.360754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.360764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.369922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.369948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.369958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.379909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.379934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.379944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.389010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.389035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.389047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.398176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.398200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.398211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.406364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.406394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.406404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.418434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.418461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.418471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.431087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.431113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.431123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.441628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.441652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.441662] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.451662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.451687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.451697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.460002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.460026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.460036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.471985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.472011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.472021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.481154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.481179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.481189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.490297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.490324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.490334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.444 [2024-05-15 01:04:33.501688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:46.444 [2024-05-15 01:04:33.501718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.444 [2024-05-15 01:04:33.501729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.704 00:27:46.704 Latency(us) 00:27:46.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.704 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:46.704 nvme0n1 : 2.05 24660.07 96.33 0.00 0.00 5086.61 2569.70 44702.45 
00:27:46.704 =================================================================================================================== 00:27:46.704 Total : 24660.07 96.33 0.00 0.00 5086.61 2569.70 44702.45 00:27:46.704 0 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:46.704 | .driver_specific 00:27:46.704 | .nvme_error 00:27:46.704 | .status_code 00:27:46.704 | .command_transient_transport_error' 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 )) 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3633894 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3633894 ']' 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3633894 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:46.704 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3633894 00:27:46.705 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:46.705 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:46.705 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3633894' 00:27:46.705 killing process with pid 3633894 00:27:46.705 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3633894 00:27:46.705 Received shutdown signal, test time was about 2.000000 seconds 00:27:46.705 00:27:46.705 Latency(us) 00:27:46.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.705 =================================================================================================================== 00:27:46.705 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:46.705 01:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3633894 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3634509 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3634509 /var/tmp/bperf.sock 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
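The pass/fail check in the trace above ((( 197 > 0 ))) is get_transient_errcount: the run only counts as a pass if the injected digest failures were surfaced to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions and tallied by the nvme bdev's error statistics. A minimal sketch of re-running that query by hand against the bdevperf RPC socket, using the rpc.py call and jq filter shown in the trace (the rpc.py path, bperf.sock, and the nvme0n1 bdev name are taken from this environment and may need adjusting elsewhere):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Per-status-code NVMe error counters only show up in bdev_get_iostat when the
    # controller was configured with bdev_nvme_set_options --nvme-error-stat.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test passes when at least one such completion was counted; this run saw 197.
    (( errcount > 0 )) && echo "digest errors reported as transient transport errors: $errcount"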
common/autotest_common.sh@827 -- # '[' -z 3634509 ']' 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:47.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:47.277 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:47.277 [2024-05-15 01:04:34.188869] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:47.277 [2024-05-15 01:04:34.189010] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634509 ] 00:27:47.277 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:47.277 Zero copy mechanism will not be used. 00:27:47.277 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.277 [2024-05-15 01:04:34.318739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.536 [2024-05-15 01:04:34.411802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.102 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.102 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:48.102 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.102 01:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.102 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:48.102 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.102 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.102 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.102 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.102 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.362 nvme0n1 00:27:48.362 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:27:48.362 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.362 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.362 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.362 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:48.362 01:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:48.362 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:48.362 Zero copy mechanism will not be used. 00:27:48.362 Running I/O for 2 seconds... 00:27:48.362 [2024-05-15 01:04:35.412707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.362 [2024-05-15 01:04:35.412764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.362 [2024-05-15 01:04:35.412780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.362 [2024-05-15 01:04:35.418029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.362 [2024-05-15 01:04:35.418074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.362 [2024-05-15 01:04:35.418087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.362 [2024-05-15 01:04:35.422312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.362 [2024-05-15 01:04:35.422339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.362 [2024-05-15 01:04:35.422350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.362 [2024-05-15 01:04:35.425632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.362 [2024-05-15 01:04:35.425658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.362 [2024-05-15 01:04:35.425669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.622 [2024-05-15 01:04:35.429531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.622 [2024-05-15 01:04:35.429556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.622 [2024-05-15 01:04:35.429568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.622 [2024-05-15 01:04:35.434093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
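The trace just above shows how the second case (randread, 131072-byte I/O, queue depth 16) is wired up before its digest errors start appearing: a fresh bdevperf is started with -w randread -o 131072 -t 2 -q 16 -z listening on /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with data digest (--ddgst) while crc32c error injection is disabled, and only then is the accel crc32c operation set to corrupt 32 results before perform_tests is driven over RPC. A condensed, stand-alone sketch of that sequence follows; the helper functions mimic the bperf_rpc/rpc_cmd/bperf_py wrappers seen in the trace, and since the log does not show which socket rpc_cmd targets, the default-socket assumption below is mine:

    spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
    bperf_rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    rpc_cmd()   { "$spdk/scripts/rpc.py" "$@"; }   # assumed: default SPDK RPC socket
    bperf_py()  { "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock "$@"; }
    # keep per-status-code NVMe error counters and retry failed I/O indefinitely
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # bring up the data-digest-enabled connection with crc32c injection disabled
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt 32 crc32c results so READ data digests fail, then run the workload
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    bperf_py perform_tests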
data digest error on tqpair=(0x6150003a3c00) 00:27:48.622 [2024-05-15 01:04:35.434117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.622 [2024-05-15 01:04:35.434128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.622 [2024-05-15 01:04:35.438811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.622 [2024-05-15 01:04:35.438835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.622 [2024-05-15 01:04:35.438845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.622 [2024-05-15 01:04:35.444130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.622 [2024-05-15 01:04:35.444154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.622 [2024-05-15 01:04:35.444164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.622 [2024-05-15 01:04:35.448941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.622 [2024-05-15 01:04:35.448965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.622 [2024-05-15 01:04:35.448975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.622 [2024-05-15 01:04:35.454799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.622 [2024-05-15 01:04:35.454822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.622 [2024-05-15 01:04:35.454832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.622 [2024-05-15 01:04:35.459727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.622 [2024-05-15 01:04:35.459750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.622 [2024-05-15 01:04:35.459760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.622 [2024-05-15 01:04:35.464612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.464635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.464645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 
01:04:35.469912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.469936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.469945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.476860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.476899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.476910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.482892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.482917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.482928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.490338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.490366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.490377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.497225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.497250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.497260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.504291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.504318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.504328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.511831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.511855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.511865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.517154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.517176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.517186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.521318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.521343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.521353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.525519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.525543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.525553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.528006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.528030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.528040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.530888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.530912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.530921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.534539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.534563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.534573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.538321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.538344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.538354] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.542528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.542551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.542561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.546484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.546507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.546517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.551899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.551923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.551933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.555786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.555809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.555819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.559673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.559697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.559707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.563472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.563495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.563505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.566932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.566954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.566965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.570640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.570662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.570672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.574581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.574603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.574616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.579323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.579347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.579357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.584395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.584417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.584427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.588793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.588816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.588826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.592283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.592311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.592323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.595736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.595764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.595774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.599236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.623 [2024-05-15 01:04:35.599260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.623 [2024-05-15 01:04:35.599269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.623 [2024-05-15 01:04:35.602753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.602777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.602787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.606251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.606274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.606292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.609896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.609920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.609930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.613505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.613530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.613540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.617464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.617489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.617499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.621870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.621894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.621904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.625709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.625733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.625743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.629580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.629604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.629614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.633423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.633445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.633454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.637197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.637220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.637229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.640980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.641004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.641017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.644746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.644770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.644779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.648534] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.648557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.648566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.653123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.653145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.653154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.657252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.657275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.657284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.661123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.661147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.661156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.664909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.664932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.664941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.670208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.670231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.670241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.674377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.674399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.674407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.678213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.678236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.678245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.624 [2024-05-15 01:04:35.682673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.624 [2024-05-15 01:04:35.682698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.624 [2024-05-15 01:04:35.682707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.687596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.687619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.687629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.692454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.692480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.692490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.696230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.696253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.696262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.700669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.700695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.700704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.705121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.705144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.705153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.710518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.710541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.710550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.713258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.713280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.713296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.715878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.715900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.715909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.719185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.719207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.719216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.722731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.722758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.722768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.726258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.726282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.726292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.730041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.730067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.730077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.734405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.734428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.734438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.738895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.738919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.738929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.744198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.744221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.744230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.747990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.748013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.748022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.751542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.751565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.751574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.755386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.893 [2024-05-15 01:04:35.755408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.893 [2024-05-15 01:04:35.755417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.893 [2024-05-15 01:04:35.759117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.759141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.759150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.762186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.762209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.762219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.764467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.764488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.764497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.768661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.768682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.768691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.773700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.773722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.773731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.777499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.777521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.777534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.781315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.781337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.781347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.785077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.785099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.785108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.788774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.788795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.788805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.793621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.793645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.793654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.797280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.797302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.797312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.801060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.801081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.801090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.804874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.804903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.804913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.808617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.808641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.808650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.812422] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.812449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.812459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.816694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.816716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.816725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.821715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.821739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.821749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.826596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.826619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.826629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.833343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.833366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.833383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.838124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.838147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.838155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.842786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.842809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.842818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.847004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.847026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.847035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.851672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.851695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.851709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.856150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.856173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.856182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.859793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.859816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.859826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.863613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.863635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.863644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.867408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.867432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.867441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.871668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.871692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.871701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.875092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.875114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.875124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.878649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.878671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.878680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.882226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.882249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.882258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.885840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.885867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.885876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.890727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.890752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.890761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.896316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.896340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.896350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.902399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.902423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.902432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.906889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.906912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.906921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.911267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.911294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.894 [2024-05-15 01:04:35.911304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.894 [2024-05-15 01:04:35.914857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.894 [2024-05-15 01:04:35.914881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.914891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.918463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.918488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.918497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.921995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.922019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.922029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.925524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.925547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.925556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.929069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.929092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.929101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.932754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.932777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.932786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.936808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.936831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.936840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.941349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.941373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.941383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.945170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.945193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.945202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.948986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.949010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.949019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.895 [2024-05-15 01:04:35.952624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:48.895 [2024-05-15 01:04:35.952647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.895 [2024-05-15 01:04:35.952656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.956935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.956963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.956972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.960863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.960888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.960899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.965559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.965583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.965592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.969472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.969497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.969508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.973264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.973288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.973297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.977084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.977109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.977118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.980913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.980937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.980945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.985013] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.985041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.985057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.989040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.989077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.989089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.993015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.993050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.993060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:35.997261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:35.997287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:35.997297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.002592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.002619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.002630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.006368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.006393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.006404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.010527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.010559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.010569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.015586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.015618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.015629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.022508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.022537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.022548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.027772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.027798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.027808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.032364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.032393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.032403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.036964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.036989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.036998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.042611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.042637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.042647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.046200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.046226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.046235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.049878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.049903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.049912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.055206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.055231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.055240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.159 [2024-05-15 01:04:36.057343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.159 [2024-05-15 01:04:36.057366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.159 [2024-05-15 01:04:36.057376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.060665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.060691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.060701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.064166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.064191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.064201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.067897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.067923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.067933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.072143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.072168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1600 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.072177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.077250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.077273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.077283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.081042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.081074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.081083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.085213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.085238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.085247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.090227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.090254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.090264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.096081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.096105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.096115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.101884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.101909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.101919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.106544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.106569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.106582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.111297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.111322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.111331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.115898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.115924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.115934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.119530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.119555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.119565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.123078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.123104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.123113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.126622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.126647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.126656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.130097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.130120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.130130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.133860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.133885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.133894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.138246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.138271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.138280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.143299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.143326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.143336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.149904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.149930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.149939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.156056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.156082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.156091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.163787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.163812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.163822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.169657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.169682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.169691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 
01:04:36.173643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.173668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.173677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.177484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.177508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.177517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.181318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.160 [2024-05-15 01:04:36.181343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.160 [2024-05-15 01:04:36.181352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.160 [2024-05-15 01:04:36.185151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.185176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.185189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.161 [2024-05-15 01:04:36.189053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.189077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.189087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.161 [2024-05-15 01:04:36.192855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.192882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.192895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.161 [2024-05-15 01:04:36.196229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.196255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.196265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.161 [2024-05-15 01:04:36.199089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.199115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.199126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.161 [2024-05-15 01:04:36.203134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.203162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.203172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.161 [2024-05-15 01:04:36.207619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.207646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.207656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.161 [2024-05-15 01:04:36.211480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.211505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.211514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.161 [2024-05-15 01:04:36.215874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.215899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.215908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.161 [2024-05-15 01:04:36.219718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.161 [2024-05-15 01:04:36.219745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.161 [2024-05-15 01:04:36.219755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.223551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.223577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.223586] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.227338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.227363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.227372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.231135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.231160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.231169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.234668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.234692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.234700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.237719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.237743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.237752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.240539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.240563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.240572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.244610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.244636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.244645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.248852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.248877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.248891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.253381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.253405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.253414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.258178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.258204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.258213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.265129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.265158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.265170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.270991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.271016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.271026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.278661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.278686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.278695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.285470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.285495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.285505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.422 [2024-05-15 01:04:36.291935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.422 [2024-05-15 01:04:36.291962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.422 [2024-05-15 01:04:36.291971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.297185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.297211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.297221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.301371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.301395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.301405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.304959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.304985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.304994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.308301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.308325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.308334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.311592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.311617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.311626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.315092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.315115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.315124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.318948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.318973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.318982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.323637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.323663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.323672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.327826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.327851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.327860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.332910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.332934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.332948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.339685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.339711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.339720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.346559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.346585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.346594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.353607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.353632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.353641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.361325] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.361350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.361360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.368157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.368182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.368192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.374951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.374977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.374986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.381651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.381677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.381686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.388494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.388519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.388528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.395408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.395438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.395448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.402295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.402320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.402329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.409270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.409294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.409303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.416091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.416115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.416125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.423080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.423106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.423125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.430032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.430064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.430074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.436744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.436769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.436778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.441356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.441385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.441396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.445685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.423 [2024-05-15 01:04:36.445711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-05-15 01:04:36.445725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.423 [2024-05-15 01:04:36.449485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.424 [2024-05-15 01:04:36.449509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.424 [2024-05-15 01:04:36.449519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.424 [2024-05-15 01:04:36.453408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.424 [2024-05-15 01:04:36.453433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.424 [2024-05-15 01:04:36.453443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.424 [2024-05-15 01:04:36.458054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.424 [2024-05-15 01:04:36.458079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.424 [2024-05-15 01:04:36.458089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.424 [2024-05-15 01:04:36.462042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.424 [2024-05-15 01:04:36.462071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.424 [2024-05-15 01:04:36.462080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.424 [2024-05-15 01:04:36.466384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.424 [2024-05-15 01:04:36.466408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.424 [2024-05-15 01:04:36.466418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.424 [2024-05-15 01:04:36.470792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.424 [2024-05-15 01:04:36.470817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.424 [2024-05-15 01:04:36.470827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.424 [2024-05-15 01:04:36.475174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.424 [2024-05-15 01:04:36.475198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:49.424 [2024-05-15 01:04:36.475207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.424 [2024-05-15 01:04:36.479531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.424 [2024-05-15 01:04:36.479556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.424 [2024-05-15 01:04:36.479565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.424 [2024-05-15 01:04:36.483955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.424 [2024-05-15 01:04:36.483985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.424 [2024-05-15 01:04:36.483996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.684 [2024-05-15 01:04:36.488166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.684 [2024-05-15 01:04:36.488191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.684 [2024-05-15 01:04:36.488200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.684 [2024-05-15 01:04:36.492508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.684 [2024-05-15 01:04:36.492534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.684 [2024-05-15 01:04:36.492543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.684 [2024-05-15 01:04:36.496823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.684 [2024-05-15 01:04:36.496848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.684 [2024-05-15 01:04:36.496857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.684 [2024-05-15 01:04:36.501164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.684 [2024-05-15 01:04:36.501190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.684 [2024-05-15 01:04:36.501199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.684 [2024-05-15 01:04:36.505011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.505035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.505049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.509204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.509229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.509238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.513220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.513245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.513255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.517676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.517700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.517709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.521544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.521569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.521578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.525013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.525037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.525051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.528710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.528734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.528744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.532254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.532279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.532289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.535743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.535766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.535775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.539511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.539534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.539543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.543307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.543331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.543340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.548155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.548180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.548188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.552282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.552308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.552317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.556413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.556436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.556445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.560947] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.560970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.560980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.566624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.566647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.566656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.573647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.573672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.573681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.578646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.578669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.578678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.583853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.583878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.583887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.589023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.589052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.589061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.592214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.685 [2024-05-15 01:04:36.592239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.685 [2024-05-15 01:04:36.592249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.685 [2024-05-15 01:04:36.597580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.597603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.597613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.602357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.602380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.602390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.607153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.607177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.607186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.612054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.612078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.612087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.617163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.617194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.617203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.623946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.623974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.623983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.630860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.630886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.630896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.636134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.636166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.636177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.640388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.640420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.640432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.644136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.644161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.644171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.647956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.647981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.647990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.651837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.651862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.651871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.656127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.656152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.656162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.661906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.661930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.661941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.667785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.667810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.667819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.672504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.672529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.672539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.676402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.676426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.676435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.681165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.681189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.681198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.685030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.685061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.685071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.688684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.688708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.688719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.692221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.692248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.692259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.695656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.695681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.695692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.699212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.699237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.699250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.703450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.703475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.703486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.707978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.708002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.708011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.711847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.711873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.711888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.686 [2024-05-15 01:04:36.716326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.686 [2024-05-15 01:04:36.716352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.686 [2024-05-15 01:04:36.716363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.687 [2024-05-15 01:04:36.721359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:49.687 [2024-05-15 01:04:36.721384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.687 [2024-05-15 01:04:36.721393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.687 [2024-05-15 01:04:36.728220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.687 [2024-05-15 01:04:36.728245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.687 [2024-05-15 01:04:36.728255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.687 [2024-05-15 01:04:36.735075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.687 [2024-05-15 01:04:36.735100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.687 [2024-05-15 01:04:36.735110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.687 [2024-05-15 01:04:36.741887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.687 [2024-05-15 01:04:36.741914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.687 [2024-05-15 01:04:36.741924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.949 [2024-05-15 01:04:36.748743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.949 [2024-05-15 01:04:36.748770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.949 [2024-05-15 01:04:36.748780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.949 [2024-05-15 01:04:36.755682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.949 [2024-05-15 01:04:36.755707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.949 [2024-05-15 01:04:36.755718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.949 [2024-05-15 01:04:36.760878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.949 [2024-05-15 01:04:36.760902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.949 [2024-05-15 01:04:36.760913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.949 [2024-05-15 01:04:36.765028] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.949 [2024-05-15 01:04:36.765056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.949 [2024-05-15 01:04:36.765066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.949 [2024-05-15 01:04:36.769105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.949 [2024-05-15 01:04:36.769128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.949 [2024-05-15 01:04:36.769138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.949 [2024-05-15 01:04:36.773548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.949 [2024-05-15 01:04:36.773573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.949 [2024-05-15 01:04:36.773583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.949 [2024-05-15 01:04:36.778806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.949 [2024-05-15 01:04:36.778829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.778839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.783504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.783527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.783537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.788002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.788025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.788036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.793352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.793379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.793389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.798088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.798113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.798123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.802618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.802643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.802657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.807322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.807346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.807356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.812772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.812796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.812806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.817340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.817363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.817373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.821756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.821780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.821790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.825944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.825967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.825977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.830126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.830150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.830159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.833985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.834008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.834018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.839328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.839351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.839361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.843408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.843433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.843443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.847106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.847129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.847139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.851742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.851766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.851776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.857100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.857124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.857134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.863589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.863612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.863622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.868178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.868201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.868210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.871215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.871238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.871248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.874671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.874694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.874703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.879631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.879655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.879668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.884359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.884381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.884391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.887909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.887932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.887942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.891271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.891296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.891306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.894832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.894855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.894865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.898547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.950 [2024-05-15 01:04:36.898570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.950 [2024-05-15 01:04:36.898580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.950 [2024-05-15 01:04:36.904026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.904053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.904063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.907820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.907842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.907852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.911696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.911719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.911729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.916493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.916516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.916525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.921377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.921400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.921410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.927009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.927031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.927041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.932782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.932805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.932815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.938474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.938497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.938508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.942808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.942831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.942841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.947071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.947105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.947115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.951272] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.951296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.951306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.955612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.955635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.955650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.960084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.960108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.960117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.964382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.964405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.964415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.968170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.968193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.968202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.971649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.971672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.971682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.975154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.975178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.975188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.978593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.978617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.978627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.982245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.982267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.982277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.986653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.986678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.986688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.991335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.991361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.991371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:36.996329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:36.996353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:36.996363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:37.002615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:37.002639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:37.002650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.951 [2024-05-15 01:04:37.009052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:49.951 [2024-05-15 01:04:37.009075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.951 [2024-05-15 01:04:37.009085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.264 [2024-05-15 01:04:37.016505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.264 [2024-05-15 01:04:37.016530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.016551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.024645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.024668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.024678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.032288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.032311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.032321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.040401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.040424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.040434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.047522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.047545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.047560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.054451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.054478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.054488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.061310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.061332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.061341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.068100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.068123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.068133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.076247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.076270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.076280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.084015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.084038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.084053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.090942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.090967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.090977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.097878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.097902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.097912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.104818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.104844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.104855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.111775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.111804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.111814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.118566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.118590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.118600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.123216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.123239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.123249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.127123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.127146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.127156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.130890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.130913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.130923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.134670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.134692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.134702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.138421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.138443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.138453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.141041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.141068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.141078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.143114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.143140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.143154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.146707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.146731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.146741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.151079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.151104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.151114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.156222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.156249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.156261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.160875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.160899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.160910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.164637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.164660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.164670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.265 [2024-05-15 01:04:37.168683] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.265 [2024-05-15 01:04:37.168705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.265 [2024-05-15 01:04:37.168715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.173760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.173783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.173793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.180158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.180180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.180190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.184499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.184525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.184535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.189021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.189048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.189058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.192779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.192803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.192813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.196152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.196174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.196183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.199560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.199582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.199591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.203115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.203138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.203148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.207466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.207489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.207499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.211535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.211557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.211567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.215106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.215127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.215141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.218523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.218546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.218556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.222112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.222135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.222144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.226076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.226098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.226108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.230684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.230707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.230716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.234456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.234485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.234495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.238240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.238263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.238272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.242710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.242733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.242744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.246861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.246883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.246893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.250099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.250124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.250134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.253536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.253559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.253569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.257078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.257100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.257110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.260894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.260921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.260932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.265442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.265469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.265479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.269103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.269128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.269138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.272917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.272943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.272955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.266 [2024-05-15 01:04:37.276460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.266 [2024-05-15 01:04:37.276486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.266 [2024-05-15 01:04:37.276497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.280085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.280111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.280122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.283818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.283842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.283852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.287570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.287592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.287602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.292515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.292541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.292552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.296843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.296867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.296877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.300659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.300684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.300696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.304054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.304079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.304089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.309606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.309630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.309640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.313567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.313591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.313601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.317372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.317401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.317410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.321126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.321150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.321160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.267 [2024-05-15 01:04:37.324907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.267 [2024-05-15 01:04:37.324931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.267 [2024-05-15 01:04:37.324940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.526 [2024-05-15 01:04:37.328726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.526 [2024-05-15 01:04:37.328750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.526 [2024-05-15 01:04:37.328760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.526 [2024-05-15 01:04:37.332564] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.526 [2024-05-15 01:04:37.332587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.526 [2024-05-15 01:04:37.332597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.526 [2024-05-15 01:04:37.336348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.526 [2024-05-15 01:04:37.336372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.526 [2024-05-15 01:04:37.336382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.526 [2024-05-15 01:04:37.340105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.526 [2024-05-15 01:04:37.340129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.526 [2024-05-15 01:04:37.340138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.526 [2024-05-15 01:04:37.343845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.526 [2024-05-15 01:04:37.343867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.526 [2024-05-15 01:04:37.343877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.526 [2024-05-15 01:04:37.347638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.526 [2024-05-15 01:04:37.347661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.526 [2024-05-15 01:04:37.347671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.526 [2024-05-15 01:04:37.351157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.526 [2024-05-15 01:04:37.351180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.526 [2024-05-15 01:04:37.351190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.526 [2024-05-15 01:04:37.353408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.353431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.353441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.355914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.355936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.355946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.359733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.359756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.359765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.363966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.363993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.364004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.368398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.368424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.368433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.374087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.374111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.374121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.380752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.380776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.380786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.385711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.385740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.385750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.390875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.390900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.390910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.395473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.395497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.395506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.400886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.400909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.400919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.527 [2024-05-15 01:04:37.405656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a3c00) 00:27:50.527 [2024-05-15 01:04:37.405679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.527 [2024-05-15 01:04:37.405689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.527 00:27:50.527 Latency(us) 00:27:50.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.527 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:50.527 nvme0n1 : 2.00 6826.76 853.35 0.00 0.00 2340.81 422.53 8347.22 00:27:50.527 =================================================================================================================== 00:27:50.527 Total : 6826.76 853.35 0.00 0.00 2340.81 422.53 8347.22 00:27:50.527 0 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:50.527 | .driver_specific 00:27:50.527 | .nvme_error 00:27:50.527 | .status_code 00:27:50.527 | .command_transient_transport_error' 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 440 > 0 )) 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@73 -- # killprocess 3634509 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3634509 ']' 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3634509 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.527 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3634509 00:27:50.787 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:50.787 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:50.787 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3634509' 00:27:50.787 killing process with pid 3634509 00:27:50.787 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3634509 00:27:50.787 Received shutdown signal, test time was about 2.000000 seconds 00:27:50.787 00:27:50.787 Latency(us) 00:27:50.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.787 =================================================================================================================== 00:27:50.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:50.787 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3634509 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3635334 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3635334 /var/tmp/bperf.sock 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3635334 ']' 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:51.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
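The xtrace just above is the pass/fail check for the finished randread digest case: digest.sh reads the per-controller NVMe error counters back over the bdevperf RPC socket and the test passes only if the transient transport error count is non-zero (440 in this run), after which the bdevperf process is killed. A minimal sketch of that query, assembled only from the rpc.py path, socket, bdev name and jq filter visible in the trace (the counters live under driver_specific.nvme_error because error statistics are enabled with --nvme-error-stat, as the next test's setup below also shows):

  # Ask the bdevperf app for per-bdev I/O statistics over its RPC socket,
  # then pull out the transient transport error counter that digest errors produce.
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'
  # digest.sh asserts the returned count is > 0; here it evaluated (( 440 > 0 )).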
00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:51.048 01:04:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:51.048 [2024-05-15 01:04:38.041108] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:51.048 [2024-05-15 01:04:38.041239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635334 ] 00:27:51.309 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.309 [2024-05-15 01:04:38.156506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.309 [2024-05-15 01:04:38.247485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:51.876 01:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.444 nvme0n1 00:27:52.444 01:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:52.444 01:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.444 01:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:52.444 01:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.444 01:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:52.444 01:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:52.444 Running I/O for 2 seconds... 
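The trace above is the setup for the next case (randwrite, 4 KiB blocks, queue depth 128): bdevperf is started on its own RPC socket, NVMe error statistics and unlimited bdev retries are configured, CRC32C error injection in the accel layer is disabled while the TCP controller is attached with data digest enabled (--ddgst), injection is then switched to corrupt the next 256 CRC32C operations, and the 2-second I/O run is kicked off via RPC. A condensed sketch of that flow, using only the binaries, sockets and addresses shown in this run; rpc_cmd and bperf_rpc are the autotest wrappers around rpc.py (assumed here to target the nvmf target's default RPC socket and /var/tmp/bperf.sock respectively), and the backgrounding/pid handling is simplified:

  # Start bdevperf on core 1 (-m 2) with a private RPC socket; -z makes it wait
  # for a perform_tests RPC instead of starting the workload immediately.
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # Keep per-controller NVMe error counters and retry failed I/O indefinitely,
  # so injected digest errors show up as transient transport errors, not failures.
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest enabled while injection is off.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 256 CRC32C computations, then drive the timed I/O run.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

The "Data digest error on tqpair" records that follow are the expected effect of this injection: each corrupted CRC32C makes the host report a data digest error, which is surfaced as a COMMAND TRANSIENT TRANSPORT ERROR completion and retried.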
00:27:52.444 [2024-05-15 01:04:39.334466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:27:52.444 [2024-05-15 01:04:39.335209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.335251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.345012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef6a8 00:27:52.444 [2024-05-15 01:04:39.346212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.346245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.354720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9168 00:27:52.444 [2024-05-15 01:04:39.356034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.356065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.364346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:27:52.444 [2024-05-15 01:04:39.365781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.365806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.370824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:52.444 [2024-05-15 01:04:39.371398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.371423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.380163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe720 00:27:52.444 [2024-05-15 01:04:39.380728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.380751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.391216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:52.444 [2024-05-15 01:04:39.392277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.392303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.400312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe720 00:27:52.444 [2024-05-15 01:04:39.401358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.401382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.408808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:27:52.444 [2024-05-15 01:04:39.409848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.409871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.418498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:52.444 [2024-05-15 01:04:39.419666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.444 [2024-05-15 01:04:39.419689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:52.444 [2024-05-15 01:04:39.428131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0ff8 00:27:52.444 [2024-05-15 01:04:39.429417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.445 [2024-05-15 01:04:39.429440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:52.445 [2024-05-15 01:04:39.437450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:52.445 [2024-05-15 01:04:39.438730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.445 [2024-05-15 01:04:39.438754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:52.445 [2024-05-15 01:04:39.446249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:27:52.445 [2024-05-15 01:04:39.447292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.445 [2024-05-15 01:04:39.447317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:52.445 [2024-05-15 01:04:39.454688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:52.445 [2024-05-15 01:04:39.455718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.445 [2024-05-15 01:04:39.455742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:52.445 [2024-05-15 01:04:39.464301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0630 00:27:52.445 [2024-05-15 01:04:39.465456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.445 [2024-05-15 01:04:39.465479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:52.445 [2024-05-15 01:04:39.473854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6cc8 00:27:52.445 [2024-05-15 01:04:39.475130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.445 [2024-05-15 01:04:39.475154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:52.445 [2024-05-15 01:04:39.483516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:27:52.445 [2024-05-15 01:04:39.484915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.445 [2024-05-15 01:04:39.484943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:52.445 [2024-05-15 01:04:39.493065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed920 00:27:52.445 [2024-05-15 01:04:39.494587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.445 [2024-05-15 01:04:39.494613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:52.445 [2024-05-15 01:04:39.499526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:27:52.445 [2024-05-15 01:04:39.500185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.445 [2024-05-15 01:04:39.500206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.508859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:27:52.705 [2024-05-15 01:04:39.509515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.509538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.517924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1710 00:27:52.705 [2024-05-15 01:04:39.518571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22664 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:52.705 [2024-05-15 01:04:39.518594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.527186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb760 00:27:52.705 [2024-05-15 01:04:39.527823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.527845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.535634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df988 00:27:52.705 [2024-05-15 01:04:39.536266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.536293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.545188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:27:52.705 [2024-05-15 01:04:39.545942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.545966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.554813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df550 00:27:52.705 [2024-05-15 01:04:39.555697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.555723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.564401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8e88 00:27:52.705 [2024-05-15 01:04:39.565408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.565431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.573966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9f68 00:27:52.705 [2024-05-15 01:04:39.575101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.575127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.583621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:27:52.705 [2024-05-15 01:04:39.584875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:74 nsid:1 lba:9393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.584899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.593165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:27:52.705 [2024-05-15 01:04:39.594540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.594564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.602773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:27:52.705 [2024-05-15 01:04:39.604277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.604300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.609277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:27:52.705 [2024-05-15 01:04:39.609910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.609932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.618471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:27:52.705 [2024-05-15 01:04:39.619102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.619126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.627724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:27:52.705 [2024-05-15 01:04:39.628353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.628381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.636135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:27:52.705 [2024-05-15 01:04:39.636746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.636770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:52.705 [2024-05-15 01:04:39.645776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:52.705 [2024-05-15 01:04:39.646523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.705 [2024-05-15 01:04:39.646549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.655433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eea00 00:27:52.706 [2024-05-15 01:04:39.656299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.656323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.665006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0788 00:27:52.706 [2024-05-15 01:04:39.665995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.666018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.674575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:27:52.706 [2024-05-15 01:04:39.675689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.675711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.684210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:52.706 [2024-05-15 01:04:39.685445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.685467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.693729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3060 00:27:52.706 [2024-05-15 01:04:39.695088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.695119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.703327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:52.706 [2024-05-15 01:04:39.704811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.704836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.709844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df118 
00:27:52.706 [2024-05-15 01:04:39.710470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.710492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.719040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:27:52.706 [2024-05-15 01:04:39.719657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.719679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.729409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:27:52.706 [2024-05-15 01:04:39.730510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.730532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.738953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9168 00:27:52.706 [2024-05-15 01:04:39.740181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.740205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.748515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:27:52.706 [2024-05-15 01:04:39.749861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.749883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.758158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8618 00:27:52.706 [2024-05-15 01:04:39.759630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.759654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:52.706 [2024-05-15 01:04:39.764607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6cc8 00:27:52.706 [2024-05-15 01:04:39.765217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.706 [2024-05-15 01:04:39.765240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.773835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000195f7100 00:27:52.966 [2024-05-15 01:04:39.774453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.774476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.783132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:27:52.966 [2024-05-15 01:04:39.783726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.783749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.792185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0350 00:27:52.966 [2024-05-15 01:04:39.792775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.792800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.800587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:27:52.966 [2024-05-15 01:04:39.801171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.801194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.810187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:27:52.966 [2024-05-15 01:04:39.810891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.810914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.819722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:52.966 [2024-05-15 01:04:39.820556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.820580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.829358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:27:52.966 [2024-05-15 01:04:39.830313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.830336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 
01:04:39.838912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:52.966 [2024-05-15 01:04:39.839993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.840017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.848477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:27:52.966 [2024-05-15 01:04:39.849679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.849702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.858092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:52.966 [2024-05-15 01:04:39.859422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.859445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.867633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:27:52.966 [2024-05-15 01:04:39.869087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.869111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.874123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0788 00:27:52.966 [2024-05-15 01:04:39.874705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.874727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.883419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e27f0 00:27:52.966 [2024-05-15 01:04:39.883998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.884021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.892460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:52.966 [2024-05-15 01:04:39.893032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.893058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.901009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8e88 00:27:52.966 [2024-05-15 01:04:39.901582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.901604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.910615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6890 00:27:52.966 [2024-05-15 01:04:39.911308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.911332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.920160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:27:52.966 [2024-05-15 01:04:39.920974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.920997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.929818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:27:52.966 [2024-05-15 01:04:39.930760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.930786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.939368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:52.966 [2024-05-15 01:04:39.940433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.940456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.948943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6890 00:27:52.966 [2024-05-15 01:04:39.950136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.950159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.958558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1ca0 00:27:52.966 [2024-05-15 01:04:39.959876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.959901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.968120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:52.966 [2024-05-15 01:04:39.969562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.966 [2024-05-15 01:04:39.969590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:52.966 [2024-05-15 01:04:39.974621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8e88 00:27:52.967 [2024-05-15 01:04:39.975195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.967 [2024-05-15 01:04:39.975218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:52.967 [2024-05-15 01:04:39.984266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:27:52.967 [2024-05-15 01:04:39.984964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.967 [2024-05-15 01:04:39.984990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:52.967 [2024-05-15 01:04:39.993473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:27:52.967 [2024-05-15 01:04:39.994162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.967 [2024-05-15 01:04:39.994187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:52.967 [2024-05-15 01:04:40.003676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:27:52.967 [2024-05-15 01:04:40.004559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.967 [2024-05-15 01:04:40.004594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:52.967 [2024-05-15 01:04:40.014575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:27:52.967 [2024-05-15 01:04:40.015353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.967 [2024-05-15 01:04:40.015378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:52.967 [2024-05-15 01:04:40.025326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6890 00:27:52.967 [2024-05-15 01:04:40.026175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:52.967 [2024-05-15 01:04:40.026200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.039343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:27:53.227 [2024-05-15 01:04:40.040794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.040821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.048981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9168 00:27:53.227 [2024-05-15 01:04:40.050524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.050548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.055518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:27:53.227 [2024-05-15 01:04:40.056186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.056208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.064890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:27:53.227 [2024-05-15 01:04:40.065558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.065581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.074281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:53.227 [2024-05-15 01:04:40.075021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.075052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.084480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0350 00:27:53.227 [2024-05-15 01:04:40.085133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.085156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.094135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:53.227 [2024-05-15 01:04:40.094902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 
nsid:1 lba:2278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.094933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.103985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5a90 00:27:53.227 [2024-05-15 01:04:40.104892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.104917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.113728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e27f0 00:27:53.227 [2024-05-15 01:04:40.114750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.114774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.123309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1430 00:27:53.227 [2024-05-15 01:04:40.124461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.124485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.132954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:53.227 [2024-05-15 01:04:40.134224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.134247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.142592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ddc00 00:27:53.227 [2024-05-15 01:04:40.143985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.144024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.152164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7100 00:27:53.227 [2024-05-15 01:04:40.153679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.153703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.158696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:53.227 [2024-05-15 01:04:40.159348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.159370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.167994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:27:53.227 [2024-05-15 01:04:40.168637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.168660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.176511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:27:53.227 [2024-05-15 01:04:40.177163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.177192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.186161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:53.227 [2024-05-15 01:04:40.186920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.186945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.195744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6020 00:27:53.227 [2024-05-15 01:04:40.196632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.227 [2024-05-15 01:04:40.196657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:53.227 [2024-05-15 01:04:40.205323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fdeb0 00:27:53.228 [2024-05-15 01:04:40.206334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.228 [2024-05-15 01:04:40.206357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:53.228 [2024-05-15 01:04:40.214970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:27:53.228 [2024-05-15 01:04:40.216107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.228 [2024-05-15 01:04:40.216132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:53.228 [2024-05-15 01:04:40.224519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 
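Each digest error above is paired with nvme_qpair.c printing the affected WRITE and its completion, where "(00/22)" is status code type 00h (generic command status) and status code 22h (Transient Transport Error), and dnr:0 indicates the host is permitted to retry. The snippet below is a small, hypothetical decoder for the completion status word whose fields the log prints as sqhd/p/m/dnr; the struct and names are illustrative, not SPDK API.

/* Hypothetical decoder for the NVMe completion status word whose fields the
 * entries above print as "(sct/sc) ... sqhd:... p:... m:... dnr:...".
 * Field layout follows the NVMe base spec (CQE dword 3, bits 31:16). */
#include <stdint.h>
#include <stdio.h>

struct cpl_status {
    uint8_t p;    /* phase tag */
    uint8_t sc;   /* status code */
    uint8_t sct;  /* status code type */
    uint8_t crd;  /* command retry delay */
    uint8_t m;    /* more */
    uint8_t dnr;  /* do not retry */
};

static struct cpl_status decode_status(uint16_t raw)
{
    struct cpl_status st = {
        .p   =  raw        & 0x1,
        .sc  = (raw >> 1)  & 0xff,
        .sct = (raw >> 9)  & 0x7,
        .crd = (raw >> 12) & 0x3,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return st;
}

int main(void)
{
    /* SCT 00h (generic), SC 22h (Transient Transport Error), DNR 0: the
     * combination reported for every corrupted WRITE in this log, meaning
     * the host may retry the command. */
    uint16_t raw = (0x0u << 9) | (0x22u << 1);
    struct cpl_status st = decode_status(raw);

    printf("sct:%02x sc:%02x p:%x m:%x dnr:%x\n", st.sct, st.sc, st.p, st.m, st.dnr);
    return 0;
}
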
00:27:53.228 [2024-05-15 01:04:40.225776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.228 [2024-05-15 01:04:40.225800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:53.228 [2024-05-15 01:04:40.234148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:53.228 [2024-05-15 01:04:40.235529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.228 [2024-05-15 01:04:40.235553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:53.228 [2024-05-15 01:04:40.243734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:27:53.228 [2024-05-15 01:04:40.245248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.228 [2024-05-15 01:04:40.245271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:53.228 [2024-05-15 01:04:40.250198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0350 00:27:53.228 [2024-05-15 01:04:40.250839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.228 [2024-05-15 01:04:40.250861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:53.228 [2024-05-15 01:04:40.260682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:27:53.228 [2024-05-15 01:04:40.261814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.228 [2024-05-15 01:04:40.261838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.228 [2024-05-15 01:04:40.270261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:53.228 [2024-05-15 01:04:40.271510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.228 [2024-05-15 01:04:40.271533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:53.228 [2024-05-15 01:04:40.279857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1868 00:27:53.228 [2024-05-15 01:04:40.281239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.228 [2024-05-15 01:04:40.281266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.228 [2024-05-15 01:04:40.289494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000004480) with pdu=0x2000195e5a90 00:27:53.489 [2024-05-15 01:04:40.290993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.291025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.295950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:27:53.489 [2024-05-15 01:04:40.296590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.296615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.305194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb480 00:27:53.489 [2024-05-15 01:04:40.305823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.305846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.313790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f96f8 00:27:53.489 [2024-05-15 01:04:40.314417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.314439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.323356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:53.489 [2024-05-15 01:04:40.324110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.324133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.332975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:53.489 [2024-05-15 01:04:40.333847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.333870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.342576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9f68 00:27:53.489 [2024-05-15 01:04:40.343572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.343597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 
01:04:40.352145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:27:53.489 [2024-05-15 01:04:40.353264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.353288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.361835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:53.489 [2024-05-15 01:04:40.363093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.363117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.371416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0ea0 00:27:53.489 [2024-05-15 01:04:40.372780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.372804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.381012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:27:53.489 [2024-05-15 01:04:40.382507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.382534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.387535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2510 00:27:53.489 [2024-05-15 01:04:40.388162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.388185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.397082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e27f0 00:27:53.489 [2024-05-15 01:04:40.397828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.397852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.406689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8d30 00:27:53.489 [2024-05-15 01:04:40.407566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.407588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 
cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.416305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:27:53.489 [2024-05-15 01:04:40.417307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.417331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.426011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e2c28 00:27:53.489 [2024-05-15 01:04:40.427140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.427162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.433658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:27:53.489 [2024-05-15 01:04:40.434284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.434306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.442678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:27:53.489 [2024-05-15 01:04:40.443304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.443326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.452081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb760 00:27:53.489 [2024-05-15 01:04:40.452823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.452845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.461417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:27:53.489 [2024-05-15 01:04:40.462157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.462179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.469928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:27:53.489 [2024-05-15 01:04:40.470660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.470683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.479516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed4e8 00:27:53.489 [2024-05-15 01:04:40.480373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.489 [2024-05-15 01:04:40.480395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:53.489 [2024-05-15 01:04:40.489175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1ca0 00:27:53.490 [2024-05-15 01:04:40.490158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.490 [2024-05-15 01:04:40.490193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:53.490 [2024-05-15 01:04:40.498717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:27:53.490 [2024-05-15 01:04:40.499820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.490 [2024-05-15 01:04:40.499845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:53.490 [2024-05-15 01:04:40.508344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:27:53.490 [2024-05-15 01:04:40.509569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.490 [2024-05-15 01:04:40.509593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:53.490 [2024-05-15 01:04:40.517915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0630 00:27:53.490 [2024-05-15 01:04:40.519270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.490 [2024-05-15 01:04:40.519294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:53.490 [2024-05-15 01:04:40.527495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:27:53.490 [2024-05-15 01:04:40.528966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.490 [2024-05-15 01:04:40.528990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:53.490 [2024-05-15 01:04:40.534016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5220 00:27:53.490 [2024-05-15 01:04:40.534626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.490 [2024-05-15 
01:04:40.534648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:53.490 [2024-05-15 01:04:40.543265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:27:53.490 [2024-05-15 01:04:40.543869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.490 [2024-05-15 01:04:40.543892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:53.490 [2024-05-15 01:04:40.551780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8e88 00:27:53.748 [2024-05-15 01:04:40.552384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.748 [2024-05-15 01:04:40.552409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:53.748 [2024-05-15 01:04:40.561441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:27:53.748 [2024-05-15 01:04:40.562167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.748 [2024-05-15 01:04:40.562190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:53.748 [2024-05-15 01:04:40.570978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6300 00:27:53.748 [2024-05-15 01:04:40.571834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.748 [2024-05-15 01:04:40.571857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:53.748 [2024-05-15 01:04:40.580607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:27:53.748 [2024-05-15 01:04:40.581581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.748 [2024-05-15 01:04:40.581605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:53.748 [2024-05-15 01:04:40.590222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:27:53.748 [2024-05-15 01:04:40.591323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.748 [2024-05-15 01:04:40.591351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:53.748 [2024-05-15 01:04:40.599797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:27:53.748 [2024-05-15 01:04:40.601019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10992 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.748 [2024-05-15 01:04:40.601049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.609560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:53.749 [2024-05-15 01:04:40.610907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.610933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.619149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:53.749 [2024-05-15 01:04:40.620623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.620648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.625645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:27:53.749 [2024-05-15 01:04:40.626262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.626285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.635333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef6a8 00:27:53.749 [2024-05-15 01:04:40.636060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.636083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.644569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:53.749 [2024-05-15 01:04:40.645297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.645320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.653708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:53.749 [2024-05-15 01:04:40.654436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.654459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.663231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:53.749 [2024-05-15 01:04:40.664074] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.664097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.672791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:27:53.749 [2024-05-15 01:04:40.673764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.673788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.682466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8618 00:27:53.749 [2024-05-15 01:04:40.683615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.683641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.691811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:27:53.749 [2024-05-15 01:04:40.692582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.692611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.701067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6b70 00:27:53.749 [2024-05-15 01:04:40.701655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.701681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.710761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:27:53.749 [2024-05-15 01:04:40.711480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.711505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.720382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:27:53.749 [2024-05-15 01:04:40.721218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.721243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.730007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:53.749 [2024-05-15 
01:04:40.730975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.731000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.739719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df550 00:27:53.749 [2024-05-15 01:04:40.740803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.740828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.749294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:27:53.749 [2024-05-15 01:04:40.750503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.750530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.758961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:27:53.749 [2024-05-15 01:04:40.760303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.760328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.768596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7c50 00:27:53.749 [2024-05-15 01:04:40.770055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.770079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.775085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:27:53.749 [2024-05-15 01:04:40.775669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.775691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.784736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0ff8 00:27:53.749 [2024-05-15 01:04:40.785453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.785477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.794015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000195f8618 00:27:53.749 [2024-05-15 01:04:40.794731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.794764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:53.749 [2024-05-15 01:04:40.803159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8618 00:27:53.749 [2024-05-15 01:04:40.803868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.749 [2024-05-15 01:04:40.803891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:54.007 [2024-05-15 01:04:40.812689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:27:54.007 [2024-05-15 01:04:40.813519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.007 [2024-05-15 01:04:40.813543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:54.007 [2024-05-15 01:04:40.822277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:54.007 [2024-05-15 01:04:40.823235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.007 [2024-05-15 01:04:40.823258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:54.007 [2024-05-15 01:04:40.832095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4140 00:27:54.008 [2024-05-15 01:04:40.833133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.833162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.841724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6b70 00:27:54.008 [2024-05-15 01:04:40.842785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.842810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.853013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:27:54.008 [2024-05-15 01:04:40.854297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.854321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.864680] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:27:54.008 [2024-05-15 01:04:40.866161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.866187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.876306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f57b0 00:27:54.008 [2024-05-15 01:04:40.877779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.877805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.887275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:27:54.008 [2024-05-15 01:04:40.888430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.888458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.897909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6890 00:27:54.008 [2024-05-15 01:04:40.899075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.899128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.909634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:54.008 [2024-05-15 01:04:40.910886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.910912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.920547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ff3c8 00:27:54.008 [2024-05-15 01:04:40.921973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.921997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.931466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6fa8 00:27:54.008 [2024-05-15 01:04:40.932790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.932814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0052 
p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.942815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:54.008 [2024-05-15 01:04:40.944383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.944407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.954437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:27:54.008 [2024-05-15 01:04:40.956150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.956175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.962271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:27:54.008 [2024-05-15 01:04:40.962932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.962956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.975817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dece0 00:27:54.008 [2024-05-15 01:04:40.977082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.977108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.987079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dece0 00:27:54.008 [2024-05-15 01:04:40.988381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:40.988409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:40.999736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:27:54.008 [2024-05-15 01:04:41.001641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:41.001669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:41.008009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:54.008 [2024-05-15 01:04:41.008843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:41.008870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:41.020772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:54.008 [2024-05-15 01:04:41.022150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:41.022176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:41.031680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:54.008 [2024-05-15 01:04:41.033079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:41.033104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:41.041591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:27:54.008 [2024-05-15 01:04:41.042620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:41.042643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:41.050042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:27:54.008 [2024-05-15 01:04:41.051064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:41.051089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:41.059624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:27:54.008 [2024-05-15 01:04:41.060765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:41.060790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:54.008 [2024-05-15 01:04:41.069841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:27:54.008 [2024-05-15 01:04:41.071238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.008 [2024-05-15 01:04:41.071264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.080595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:54.266 [2024-05-15 01:04:41.082174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.082206] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.091580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:27:54.266 [2024-05-15 01:04:41.093263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.093291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.102726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df550 00:27:54.266 [2024-05-15 01:04:41.104412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.104440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.113848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:54.266 [2024-05-15 01:04:41.115558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.115584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.121032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fdeb0 00:27:54.266 [2024-05-15 01:04:41.121731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.121754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.131258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:27:54.266 [2024-05-15 01:04:41.131958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.131982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.139927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:27:54.266 [2024-05-15 01:04:41.140557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.140580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.149800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:27:54.266 [2024-05-15 01:04:41.150624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10336 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.150649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.161186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dece0 00:27:54.266 [2024-05-15 01:04:41.162180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.162204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.171095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:27:54.266 [2024-05-15 01:04:41.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.266 [2024-05-15 01:04:41.172125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:54.266 [2024-05-15 01:04:41.180753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0 00:27:54.266 [2024-05-15 01:04:41.181878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.181901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.190311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:27:54.267 [2024-05-15 01:04:41.191567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.191592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.199891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:27:54.267 [2024-05-15 01:04:41.201267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.201291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.209525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:27:54.267 [2024-05-15 01:04:41.211020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.211043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.215979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:27:54.267 [2024-05-15 01:04:41.216618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:98 nsid:1 lba:6596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.216645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.225567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f92c0 00:27:54.267 [2024-05-15 01:04:41.226327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.226351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.234856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:27:54.267 [2024-05-15 01:04:41.235615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.235639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.243366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:27:54.267 [2024-05-15 01:04:41.244123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.244145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.253006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:27:54.267 [2024-05-15 01:04:41.253876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.253898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.262607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eea00 00:27:54.267 [2024-05-15 01:04:41.263606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.263633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.272455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eee38 00:27:54.267 [2024-05-15 01:04:41.273795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.273822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.283904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:27:54.267 [2024-05-15 
01:04:41.285233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.285257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.293533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:27:54.267 [2024-05-15 01:04:41.294898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.294923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.303144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe720 00:27:54.267 [2024-05-15 01:04:41.304634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.304656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.309653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6890 00:27:54.267 [2024-05-15 01:04:41.310281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.310303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:54.267 [2024-05-15 01:04:41.318859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:27:54.267 [2024-05-15 01:04:41.319482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.267 [2024-05-15 01:04:41.319506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:54.267 00:27:54.267 Latency(us) 00:27:54.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.267 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:54.267 nvme0n1 : 2.00 26909.21 105.11 0.00 0.00 4749.98 2328.25 14210.96 00:27:54.267 =================================================================================================================== 00:27:54.267 Total : 26909.21 105.11 0.00 0.00 4749.98 2328.25 14210.96 00:27:54.267 0 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:54.526 | .driver_specific 00:27:54.526 | .nvme_error 00:27:54.526 | .status_code 00:27:54.526 | .command_transient_transport_error' 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 )) 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3635334 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3635334 ']' 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3635334 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3635334 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3635334' 00:27:54.526 killing process with pid 3635334 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3635334 00:27:54.526 Received shutdown signal, test time was about 2.000000 seconds 00:27:54.526 00:27:54.526 Latency(us) 00:27:54.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.526 =================================================================================================================== 00:27:54.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.526 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3635334 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3636006 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3636006 /var/tmp/bperf.sock 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3636006 ']' 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:55.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:55.097 01:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:55.097 [2024-05-15 01:04:41.925795] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:27:55.097 [2024-05-15 01:04:41.925883] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636006 ] 00:27:55.097 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:55.097 Zero copy mechanism will not be used. 00:27:55.097 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.097 [2024-05-15 01:04:42.011778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.097 [2024-05-15 01:04:42.102657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.666 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:55.666 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:55.666 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:55.666 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:55.924 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:55.924 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.924 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:55.924 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.924 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:55.924 01:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.183 nvme0n1 00:27:56.183 01:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:56.183 01:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.183 01:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.183 01:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.183 01:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:56.183 01:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.183 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.183 Zero copy mechanism will not be used. 00:27:56.183 Running I/O for 2 seconds... 00:27:56.183 [2024-05-15 01:04:43.162588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.162832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.162872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.166131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.166364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.166395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.169591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.169809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.169836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.172969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.173187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.173210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.176377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.176589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.176612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.179751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.179961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.179987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.183149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 
00:27:56.183 [2024-05-15 01:04:43.183358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.183381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.187039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.187284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.187311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.190856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.191078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.191103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.195606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.195818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.195843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.198954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.199168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.199192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.202286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.202503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.202526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.205559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.205766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.205788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.208909] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.209122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.209144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.212187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.212398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.212419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.215724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.215937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.215961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.219746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.219954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.219977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.225001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.225229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.225252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.229060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.229272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.229299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.233061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.233274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.233299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:56.183 [2024-05-15 01:04:43.237070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.237271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.237295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.240981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.241210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.241233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.183 [2024-05-15 01:04:43.245709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.183 [2024-05-15 01:04:43.245929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.183 [2024-05-15 01:04:43.245952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.249770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.249980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.250003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.253119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.253328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.253349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.256449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.256658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.256679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.260179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.260400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.260422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.263829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.264040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.264067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.267158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.267368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.267389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.271476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.271681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.271704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.277183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.277391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.277414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.283118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.283328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.283350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.289145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.289359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.289383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.298209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.298291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 
[2024-05-15 01:04:43.298317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.304471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.304690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.304713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.443 [2024-05-15 01:04:43.310423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.443 [2024-05-15 01:04:43.310631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.443 [2024-05-15 01:04:43.310658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.317191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.317393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.317417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.323618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.323877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.323900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.329469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.329760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.329783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.335095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.335383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.335407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.342169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.342420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.342444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.347770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.348020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.348042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.353364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.353595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.353617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.358710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.358943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.358966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.364059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.364249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.364271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.368833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.369077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.369101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.374079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.374351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.374373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.379264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.379498] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.379521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.384246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.384444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.384466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.388006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.388181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.388202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.390958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.391140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.391162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.393902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.394080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.394102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.397115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.397304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.397335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.401280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.401456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.401481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.404289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.404469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.404492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.407286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.407486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.407509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.411325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.411504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.411527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.415388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.415567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.415589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.420475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.420651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.420675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.423897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.424083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.424106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.427186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.427362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.427384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 
01:04:43.430270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.430445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.430468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.433214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.444 [2024-05-15 01:04:43.433392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.444 [2024-05-15 01:04:43.433414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.444 [2024-05-15 01:04:43.436255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.436431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.436453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.439353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.439524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.439546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.442357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.442529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.442551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.445333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.445509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.445530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.449556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.449733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.449756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.452740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.452917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.452939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.455732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.455907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.455933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.458737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.458913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.458935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.461729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.461903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.461924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.465439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.465624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.465645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.470413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.470636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.470658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.475507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.475684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.475706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.481006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.481288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.481310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.487251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.487471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.487493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.491523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.491700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.491722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.494614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.494798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.494829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.497590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.497767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.497788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.500562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.500741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.500767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.445 [2024-05-15 01:04:43.503564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.445 [2024-05-15 01:04:43.503738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:56.445 [2024-05-15 01:04:43.503760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.705 [2024-05-15 01:04:43.506510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.705 [2024-05-15 01:04:43.506680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.705 [2024-05-15 01:04:43.506706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.705 [2024-05-15 01:04:43.509483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.705 [2024-05-15 01:04:43.509656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.705 [2024-05-15 01:04:43.509678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.705 [2024-05-15 01:04:43.512718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.705 [2024-05-15 01:04:43.512909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.705 [2024-05-15 01:04:43.512932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.705 [2024-05-15 01:04:43.517108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.705 [2024-05-15 01:04:43.517403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.705 [2024-05-15 01:04:43.517425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.705 [2024-05-15 01:04:43.522295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.522489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.522515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.527178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.527387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.527410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.532924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.533129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.533151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.537946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.538200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.538223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.542988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.543173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.543195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.546479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.546631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.546653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.549446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.549593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.549615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.552653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.552845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.552866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.556786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.556945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.556967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.559652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.559812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.559837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.562474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.562626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.562648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.565294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.565447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.565469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.568368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.568567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.568589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.572440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.572686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.572706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.577461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.577650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.577672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.582103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.582303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.582325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.588052] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.588299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.588321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.593079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.593257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.593279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.598139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.598334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.598358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.602242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.602394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.602416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.605158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.605313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.605337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.607951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.608110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.608132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.610792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.610945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.610967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.614098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.614257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.614279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.618255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.618408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.618430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.621334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.621488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.621510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.624354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.624512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.624533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.627392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.627547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.706 [2024-05-15 01:04:43.627568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.706 [2024-05-15 01:04:43.630493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.706 [2024-05-15 01:04:43.630647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.630668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.633588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.633743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.633765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.636735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.636890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.636911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.639812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.639967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.639991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.642868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.643021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.643048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.645908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.646068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.646089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.648967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.649124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.649146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.651977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.652136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.652159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.655070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.655223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.655244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.658147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.658297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.658318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.661154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.661306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.661327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.664746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.664950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.664972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.669617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.669809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.669832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.674997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.675254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.675276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.681111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.681390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.681413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.686165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.686355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.686383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.691121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.691312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.691333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.696177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.696379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.696401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.701211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.701455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.701476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.706212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.706413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.706433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.711184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.711383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.711409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.716192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.716443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.716467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.721269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.721517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.721541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.726196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.726355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.726375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.731273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.731439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.731460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.736232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.736488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.736511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.741257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.741454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.741475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.746254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.746411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.746432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.751212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.751362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.751384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.756018] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.756178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.707 [2024-05-15 01:04:43.756203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.707 [2024-05-15 01:04:43.759373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.707 [2024-05-15 01:04:43.759511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.708 [2024-05-15 01:04:43.759532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.708 [2024-05-15 01:04:43.762170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.708 [2024-05-15 01:04:43.762319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.708 [2024-05-15 01:04:43.762338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.708 [2024-05-15 01:04:43.764960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.708 [2024-05-15 01:04:43.765103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.708 [2024-05-15 01:04:43.765128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.708 [2024-05-15 01:04:43.767856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.708 [2024-05-15 01:04:43.768003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.708 [2024-05-15 01:04:43.768023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.973 [2024-05-15 01:04:43.770843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.973 [2024-05-15 01:04:43.770984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.973 [2024-05-15 01:04:43.771005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.973 [2024-05-15 01:04:43.773798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.973 [2024-05-15 01:04:43.773936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.973 [2024-05-15 01:04:43.773957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.973 [2024-05-15 01:04:43.777190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.973 [2024-05-15 01:04:43.777318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.973 [2024-05-15 01:04:43.777339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.973 [2024-05-15 01:04:43.780362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.973 [2024-05-15 01:04:43.780505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.973 [2024-05-15 01:04:43.780525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.973 [2024-05-15 01:04:43.783348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.973 [2024-05-15 01:04:43.783493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.973 [2024-05-15 01:04:43.783513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.973 [2024-05-15 01:04:43.786248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.973 [2024-05-15 01:04:43.786390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.973 [2024-05-15 01:04:43.786409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.973 [2024-05-15 01:04:43.789265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.973 [2024-05-15 01:04:43.789403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.973 [2024-05-15 01:04:43.789424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.973 [2024-05-15 01:04:43.792169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.973 [2024-05-15 01:04:43.792314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.973 [2024-05-15 01:04:43.792335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.973 [2024-05-15 01:04:43.795090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.973 [2024-05-15 01:04:43.795229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.973 [2024-05-15 01:04:43.795252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.798055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.798198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.798219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.801015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.801163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.801184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.803982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.804141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.804164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.807246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.807409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.807432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.810771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.810937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.810960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.814304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.814470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.814501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.819161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.819295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.819319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.822748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.822888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.822908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.825933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.826086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.826106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.828920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.829076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.829096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.832836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.833012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.833032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.837727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.837930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.837952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.842706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.842886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.842907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.849319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.849483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.849504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.853724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.853859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.853881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.856690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.856836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.856857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.859571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.859716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.859736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.862442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.862587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.862607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.865523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.865669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.865691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.869088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:56.974 [2024-05-15 01:04:43.869221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.974 [2024-05-15 01:04:43.869241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.974 [2024-05-15 01:04:43.872894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90
00:27:56.974 [2024-05-15 01:04:43.873040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.974 [2024-05-15 01:04:43.873067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[2024-05-15 01:04:43.875886 through 01:04:44.212571, elapsed 00:27:56.974 to 00:27:57.240: the same three messages repeat for each subsequent WRITE on sqid:1 (nsid:1, len:32, lba varying per command; cid:15 through 01:04:44.072102, cid:0 from 01:04:44.076635 onward): tcp.c:2058:data_crc32_calc_done logs *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90, nvme_io_qpair_print_command prints the affected WRITE, and spdk_nvme_print_completion logs *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0 p:0 m:0 dnr:0, with sqhd cycling 0001/0021/0041/0061]
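The error being repeated here is the NVMe/TCP data digest (DDGST) check: the digest carried in each data PDU is a CRC-32C over the PDU's data section, the host recomputes it on receipt, and a mismatch (the condition data_crc32_calc_done reports above) causes the corresponding command to be completed with COMMAND TRANSIENT TRANSPORT ERROR instead of delivering the data. The following is a minimal, self-contained C sketch of that digest comparison, assuming a 32-byte payload and a single injected bit flip; the buffer, names, and values are illustrative, not SPDK code.

/* crc32c_ddgst_sketch.c: illustrative CRC-32C data digest check (not SPDK code). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise, reflected CRC-32C (Castagnoli), polynomial 0x1EDC6F41 (reflected 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int b = 0; b < 8; b++) {
			crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	/* Hypothetical stand-in for the data section of one data PDU (len:32 in the log above). */
	uint8_t pdu_data[32];
	memset(pdu_data, 0xA5, sizeof(pdu_data));

	/* Sender-side digest, as it would be carried in the PDU's DDGST field. */
	uint32_t ddgst_in_pdu = crc32c(pdu_data, sizeof(pdu_data));

	/* Simulate corruption in flight: flip one bit before the receiver recomputes. */
	pdu_data[7] ^= 0x01;
	uint32_t ddgst_computed = crc32c(pdu_data, sizeof(pdu_data));

	if (ddgst_computed != ddgst_in_pdu) {
		printf("data digest error: pdu carries 0x%08x, computed 0x%08x\n",
		       ddgst_in_pdu, ddgst_computed);
	}

	/* Sanity check against the standard CRC-32C test vector. */
	printf("crc32c(\"123456789\") = 0x%08x (expected 0xe3069283)\n",
	       crc32c((const uint8_t *)"123456789", 9));
	return 0;
}

Compiled with a C99 compiler, the mismatch branch fires and the last line prints 0xe3069283, the published CRC-32C check value for "123456789", which is a quick way to confirm the polynomial is the Castagnoli one used for NVMe/TCP digests rather than the zlib CRC-32.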
[2024-05-15 01:04:44.215834 through 01:04:44.434949, elapsed 00:27:57.240 to 00:27:57.506: the same pattern continues (cid:0 through 01:04:44.420653, then cid:15 again from 01:04:44.424227): data_crc32_calc_done reports the data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 and each WRITE (sqid:1, nsid:1, len:32, lba varying per command) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 p:0 m:0 dnr:0]
pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.437795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.437817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.440573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.440634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.440653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.443453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.443509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.443528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.446305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.446364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.446388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.449202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.449267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.449286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.452203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.452279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.452299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.456060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.456151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.456171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.461027] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.461206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.461226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.466358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.466428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.466448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.472553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.472700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.472722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.477585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.477662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.477682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.482582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.482736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.482756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.487544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.487700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.487720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.492284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.506 [2024-05-15 01:04:44.492396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.506 [2024-05-15 01:04:44.492418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.506 [2024-05-15 01:04:44.496658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.496761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.496783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.501675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.501825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.501846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.508041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.508180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.508201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.512785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.512875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.512895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.516481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.516542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.516561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.519364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.519424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.519444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.522279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.522336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.522360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.525307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.525404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.525423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.529326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.529419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.529438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.534336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.534479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.534500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.539614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.539668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.539692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.545240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.545344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.545365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.550279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.550411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.550432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.555312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.555373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.555393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.560300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.560445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.560466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.507 [2024-05-15 01:04:44.565266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.507 [2024-05-15 01:04:44.565412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.507 [2024-05-15 01:04:44.565434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.570266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.570365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.570385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.575241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.575447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.575467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.580220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.580400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.580421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.585258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.585356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.585375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.590272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.590424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.590445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.595355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.595521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.595543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.600324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.600476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.600500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.605361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.605553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.605575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.610353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.610489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.610511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.615560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.615710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.615733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.620620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.620773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.620794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.625578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.625732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.625753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.630311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.630400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.630420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.634669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.634779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.634799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.639690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.639860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.639881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.645999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.646085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.646109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.650885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.650977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.650998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.654624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.654684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.654706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.657497] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.769 [2024-05-15 01:04:44.657558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.769 [2024-05-15 01:04:44.657577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.769 [2024-05-15 01:04:44.660330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.660391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.660411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.663236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.663303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.663323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.666095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.666156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.666175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.668987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.669053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.669072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.671846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.671909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.671929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.674709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.674767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.674787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.677561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.677625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.677644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.680426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.680482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.680504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.683311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.683377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.683397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.686285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.686375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.686401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.690284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.690397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.690421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.695302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.695449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.695470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.700746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.700830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.700852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.705850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.706050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.706076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.710839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.710981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.711012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.715883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.716074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.716102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.720892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.721023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.721054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.725889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.726042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.726069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.730906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.731056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.731082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.735967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.736128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.736153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.740945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.741091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.741114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.745946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.746089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.746111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.751016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.751156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.751181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.756038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.756236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.756258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.760985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.761160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.761184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.766035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.766122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.766145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.770982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.771180] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.771202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.770 [2024-05-15 01:04:44.775995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.770 [2024-05-15 01:04:44.776153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.770 [2024-05-15 01:04:44.776175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.781066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.781212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.781235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.786027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.786186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.786208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.791065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.791247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.791269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.796058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.796210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.796237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.801022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.801157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.801180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.806017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 
01:04:44.806192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.806216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.811043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.811207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.811229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.816082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.816152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.816174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.821080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.821224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.821247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.826159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.826308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.826330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:57.771 [2024-05-15 01:04:44.831164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:57.771 [2024-05-15 01:04:44.831265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.771 [2024-05-15 01:04:44.831292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.836196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.836336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.836360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.841146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.841306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.841330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.846147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.846250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.846273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.851174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.851349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.851377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.856159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.856291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.856316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.861207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.861355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.861378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.866175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.866330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.866353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.871132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.871277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.871300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.876201] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.876360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.876382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.881153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.881299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.881325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.886195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.886264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.886286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.891194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.891329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.891352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.896242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.896415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.896439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.901224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.901387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.901412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.906237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.906330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.906354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.911240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.911412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.911435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.916222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.916352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.916374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.921226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.921312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.921334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.031 [2024-05-15 01:04:44.926254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.031 [2024-05-15 01:04:44.926398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.031 [2024-05-15 01:04:44.926420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.931271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.931362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.931384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.936272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.936418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.936440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.941237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.941382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.941405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.946237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.946327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.946350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.951260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.951419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.951440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.956366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.956509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.956542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.961418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.961598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.961627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.966380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.966522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.966546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.971387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.971488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.971512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.976405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.976562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 
01:04:44.976584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.981368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.981513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.981536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.986371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.986476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.986498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.991356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.991542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.991564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:44.996425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:44.996517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:44.996541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.001434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.001584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.001607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.006383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.006528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.006550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.011462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.011619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.011645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.016433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.016572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.016594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.021489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.021693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.021715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.026512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.026663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.026686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.031542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.031746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.031767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.036544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.036685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.036707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.041623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.041778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.041800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.046587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.046733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.046755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.051182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.051326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.051348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.055243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.055445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.055467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.060300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.060513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.060538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.032 [2024-05-15 01:04:45.066603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.032 [2024-05-15 01:04:45.066751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.032 [2024-05-15 01:04:45.066773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.033 [2024-05-15 01:04:45.071279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.033 [2024-05-15 01:04:45.071408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.033 [2024-05-15 01:04:45.071431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.033 [2024-05-15 01:04:45.074906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.033 [2024-05-15 01:04:45.075015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.033 [2024-05-15 01:04:45.075037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.033 [2024-05-15 01:04:45.077771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:58.033 [2024-05-15 01:04:45.077886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.033 [2024-05-15 01:04:45.077907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.033 [2024-05-15 01:04:45.080593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.033 [2024-05-15 01:04:45.080707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.033 [2024-05-15 01:04:45.080729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.033 [2024-05-15 01:04:45.083469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.033 [2024-05-15 01:04:45.083585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.033 [2024-05-15 01:04:45.083607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.033 [2024-05-15 01:04:45.086304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.033 [2024-05-15 01:04:45.086421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.033 [2024-05-15 01:04:45.086445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.033 [2024-05-15 01:04:45.089205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.033 [2024-05-15 01:04:45.089376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.033 [2024-05-15 01:04:45.089398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.033 [2024-05-15 01:04:45.092336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.033 [2024-05-15 01:04:45.092501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.033 [2024-05-15 01:04:45.092522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.292 [2024-05-15 01:04:45.097080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.292 [2024-05-15 01:04:45.097259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.292 [2024-05-15 01:04:45.097286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.292 [2024-05-15 01:04:45.101980] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.292 [2024-05-15 01:04:45.102117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.292 [2024-05-15 01:04:45.102142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.292 [2024-05-15 01:04:45.107643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.292 [2024-05-15 01:04:45.107767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.292 [2024-05-15 01:04:45.107790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.292 [2024-05-15 01:04:45.112777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.292 [2024-05-15 01:04:45.112980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.292 [2024-05-15 01:04:45.113003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:58.292 [2024-05-15 01:04:45.117718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.292 [2024-05-15 01:04:45.117934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.292 [2024-05-15 01:04:45.117956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:58.292 [2024-05-15 01:04:45.122721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.292 [2024-05-15 01:04:45.122884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.292 [2024-05-15 01:04:45.122907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:58.292 [2024-05-15 01:04:45.127797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.292 [2024-05-15 01:04:45.127965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.292 [2024-05-15 01:04:45.127988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.292 [2024-05-15 01:04:45.132767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:58.292 [2024-05-15 01:04:45.132924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.292 [2024-05-15 01:04:45.132945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0
00:27:58.292 [2024-05-15 01:04:45.137729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:27:58.292 [2024-05-15 01:04:45.137867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.292 [2024-05-15 01:04:45.137889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:58.292 [2024-05-15 01:04:45.142757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:27:58.292 [2024-05-15 01:04:45.142950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.292 [2024-05-15 01:04:45.142973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:58.292 [2024-05-15 01:04:45.147752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:27:58.292 [2024-05-15 01:04:45.147894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.292 [2024-05-15 01:04:45.147917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.292 [2024-05-15 01:04:45.152779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:27:58.292 [2024-05-15 01:04:45.152989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.292 [2024-05-15 01:04:45.153014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:58.292
00:27:58.292 Latency(us)
00:27:58.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:58.292 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:58.292 nvme0n1 : 2.00 7267.23 908.40 0.00 0.00 2197.47 1293.47 8657.65
00:27:58.292 ===================================================================================================================
00:27:58.292 Total : 7267.23 908.40 0.00 0.00 2197.47 1293.47 8657.65
00:27:58.292 0
00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:58.292 | .driver_specific
00:27:58.292 | .nvme_error
00:27:58.292 | .status_code
00:27:58.292 | .command_transient_transport_error'
00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 469 > 0 ))
00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3636006
00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error --
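The transient-error check traced above is a single RPC call plus a jq filter. A minimal standalone sketch, reusing the rpc.py path and bperf socket shown in the trace (the errcount variable name is illustrative only):

    errcount=$(/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                   bdev_get_iostat -b nvme0n1 \
               | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The step passes when at least one transient transport error was counted;
    # in this run the counter came back as 469.
    (( errcount > 0 ))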
common/autotest_common.sh@946 -- # '[' -z 3636006 ']' 00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3636006 00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3636006 00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3636006' 00:27:58.292 killing process with pid 3636006 00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3636006 00:27:58.292 Received shutdown signal, test time was about 2.000000 seconds 00:27:58.292 00:27:58.292 Latency(us) 00:27:58.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.292 =================================================================================================================== 00:27:58.292 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.292 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3636006 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3633589 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3633589 ']' 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3633589 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3633589 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3633589' 00:27:58.861 killing process with pid 3633589 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3633589 00:27:58.861 [2024-05-15 01:04:45.761099] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:58.861 01:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3633589 00:27:59.432 00:27:59.432 real 0m16.881s 00:27:59.432 user 0m32.125s 00:27:59.432 sys 0m3.488s 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:59.432 ************************************ 00:27:59.432 END TEST nvmf_digest_error 
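The killprocess helper traced here (and again later in this log) follows one simple pattern. A simplified sketch of its shape, inferred from the traced commands in test/common/autotest_common.sh; the sudo special case is omitted and the exact return codes are assumptions:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"   # matches the message printed when the pid is already gone
            return 0
        fi
        process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 / reactor_1 for SPDK apps
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap the process so the test exits cleanly
    }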
00:27:59.432 ************************************ 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:59.432 rmmod nvme_tcp 00:27:59.432 rmmod nvme_fabrics 00:27:59.432 rmmod nvme_keyring 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3633589 ']' 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3633589 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3633589 ']' 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3633589 00:27:59.432 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3633589) - No such process 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3633589 is not found' 00:27:59.432 Process with pid 3633589 is not found 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.432 01:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.335 01:04:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:01.335 00:28:01.335 real 1m30.327s 00:28:01.335 user 2m10.025s 00:28:01.335 sys 0m15.324s 00:28:01.335 01:04:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:01.335 01:04:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:01.335 ************************************ 00:28:01.335 END TEST nvmf_digest 00:28:01.335 ************************************ 00:28:01.335 01:04:48 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:28:01.335 01:04:48 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:28:01.335 01:04:48 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy-fallback == phy ]] 00:28:01.335 01:04:48 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:28:01.335 01:04:48 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:01.335 01:04:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:01.594 01:04:48 nvmf_tcp -- 
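The nvmftestfini teardown traced above reduces to a few host-side steps. A simplified sketch only; the real helpers in nvmf/common.sh add retries and error handling, and the interface name is taken from this trace:

    sync
    modprobe -v -r nvme-tcp        # the rmmod lines above show this also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    # killprocess of the target pid (3633589) reports "No such process" because it already exited
    ip -4 addr flush cvl_0_1       # drop the test address from the cvl_0_1 interface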
nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:28:01.594 00:28:01.594 real 16m0.976s 00:28:01.594 user 32m56.596s 00:28:01.594 sys 4m36.444s 00:28:01.594 01:04:48 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:01.594 01:04:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:01.594 ************************************ 00:28:01.594 END TEST nvmf_tcp 00:28:01.594 ************************************ 00:28:01.594 01:04:48 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:28:01.594 01:04:48 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:01.594 01:04:48 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:01.595 01:04:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:01.595 01:04:48 -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 ************************************ 00:28:01.595 START TEST spdkcli_nvmf_tcp 00:28:01.595 ************************************ 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:01.595 * Looking for test storage... 00:28:01.595 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.595 
01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3637482 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3637482 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3637482 ']' 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:01.595 01:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:01.853 [2024-05-15 01:04:48.701526] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:28:01.853 [2024-05-15 01:04:48.701659] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637482 ] 00:28:01.853 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.853 [2024-05-15 01:04:48.831170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:02.109 [2024-05-15 01:04:48.924019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.109 [2024-05-15 01:04:48.924092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.366 01:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:02.366 01:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:28:02.366 01:04:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:02.366 01:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.366 01:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.625 01:04:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:02.625 01:04:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:02.625 01:04:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:02.625 01:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:02.625 01:04:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.625 01:04:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:02.625 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:02.625 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:02.625 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:02.625 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:02.625 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 
00:28:02.625 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:02.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:02.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:02.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:02.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:02.625 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:02.625 ' 00:28:05.159 [2024-05-15 01:04:51.771961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.095 [2024-05-15 01:04:52.929148] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:06.095 [2024-05-15 01:04:52.929477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:08.034 [2024-05-15 01:04:55.059900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:09.937 [2024-05-15 01:04:56.890015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
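The spdkcli command list above drives the target through its configuration tree; the same setup can be sketched with direct rpc.py calls. The short option names below are the usual ones for these RPCs and are an assumption, not taken from this trace; the values mirror the Malloc3/cnode1 case:

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 32 512 -b Malloc3                      # 32 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_transport -t tcp -u 8192                      # io_unit_size=8192, as in the spdkcli line
    $rpc nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
    $rpc nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260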
NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:11.313 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:11.313 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:11.313 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:11.313 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:11.313 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:11.313 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:11.313 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:11.313 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:11.313 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:11.313 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:11.313 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:11.313 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:11.313 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:11.574 01:04:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:11.574 01:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.574 01:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.574 01:04:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:11.574 01:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:11.574 01:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.574 01:04:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:11.574 01:04:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:11.834 01:04:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:11.834 01:04:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:11.834 01:04:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:11.834 01:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.834 01:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 01:04:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:11.834 01:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:11.834 01:04:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.834 01:04:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:11.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:11.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:11.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:11.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:11.834 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:11.834 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:11.834 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:11.834 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:11.834 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:11.834 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:11.834 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:11.834 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:11.834 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:11.834 ' 00:28:17.107 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:17.107 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces 
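The check_match step traced above is a dump-and-compare: the current spdkcli tree is written out and diffed against the expected pattern file. Roughly, with paths copied from the trace; redirecting the ll output into the .test file is implied by the rm -f that follows and is an assumption here:

    spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
    $spdk/scripts/spdkcli.py ll /nvmf > $spdk/test/spdkcli/match_files/spdkcli_nvmf.test
    $spdk/test/app/match/match $spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match   # compares .test against the .test.match patterns
    rm -f $spdk/test/spdkcli/match_files/spdkcli_nvmf.test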
delete_all', 'Malloc4', False] 00:28:17.107 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:17.107 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:17.107 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:17.108 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:17.108 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:17.108 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:17.108 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:17.108 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:17.108 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:17.108 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:17.108 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:17.108 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3637482 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3637482 ']' 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3637482 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637482 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637482' 00:28:17.108 killing process with pid 3637482 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3637482 00:28:17.108 [2024-05-15 01:05:03.884782] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:17.108 01:05:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3637482 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3637482 ']' 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3637482 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3637482 ']' 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3637482 00:28:17.368 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3637482) - No such process 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3637482 is not found' 00:28:17.368 Process with pid 3637482 is not found 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:17.368 00:28:17.368 real 0m15.853s 00:28:17.368 user 0m31.984s 00:28:17.368 sys 0m0.758s 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:17.368 01:05:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.368 ************************************ 00:28:17.368 END TEST spdkcli_nvmf_tcp 00:28:17.368 ************************************ 00:28:17.368 01:05:04 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:17.368 01:05:04 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:17.368 01:05:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:17.368 01:05:04 -- common/autotest_common.sh@10 -- # set +x 00:28:17.629 ************************************ 00:28:17.629 START TEST nvmf_identify_passthru 00:28:17.629 ************************************ 00:28:17.629 01:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:17.629 * Looking for test storage... 
00:28:17.629 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:28:17.629 01:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:17.629 01:05:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.629 01:05:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.629 01:05:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:17.629 01:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:17.629 01:05:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.629 01:05:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.629 01:05:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:17.629 01:05:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.629 01:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.629 01:05:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:17.629 01:05:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:17.629 01:05:04 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:17.629 01:05:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.901 
01:05:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:22.901 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:22.901 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.901 01:05:09 
nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:22.901 Found net devices under 0000:27:00.0: cvl_0_0 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:22.901 Found net devices under 0000:27:00.1: cvl_0_1 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.901 01:05:09 
nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:28:22.901 00:28:22.901 --- 10.0.0.2 ping statistics --- 00:28:22.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.901 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:28:22.901 00:28:22.901 --- 10.0.0.1 ping statistics --- 00:28:22.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.901 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.901 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.902 01:05:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.902 01:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:22.902 01:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:28:22.902 01:05:09 
nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:28:22.902 01:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:03:00.0 00:28:22.902 01:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:03:00.0 00:28:22.902 01:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:03:00.0 ']' 00:28:22.902 01:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:28:22.902 01:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:22.902 01:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:23.161 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.541 01:05:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=233442AA2262 00:28:24.541 01:05:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:28:24.541 01:05:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:24.541 01:05:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:24.541 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.477 01:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=Micron_7450_MTFDKBA960TFR 00:28:25.477 01:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:25.477 01:05:12 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.477 01:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:25.477 01:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:25.477 01:05:12 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:25.477 01:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:25.477 01:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3644524 00:28:25.477 01:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:25.477 01:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3644524 00:28:25.477 01:05:12 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3644524 ']' 00:28:25.477 01:05:12 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.477 01:05:12 nvmf_identify_passthru -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:28:25.477 01:05:12 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.477 01:05:12 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:25.477 01:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:25.477 01:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:25.477 [2024-05-15 01:05:12.530448] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:28:25.477 [2024-05-15 01:05:12.530567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.736 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.736 [2024-05-15 01:05:12.657647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.736 [2024-05-15 01:05:12.753196] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.736 [2024-05-15 01:05:12.753238] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.736 [2024-05-15 01:05:12.753248] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.736 [2024-05-15 01:05:12.753257] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.736 [2024-05-15 01:05:12.753264] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
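The chunks that follow show the test driving the freshly started nvmf_tgt (launched with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace) over its JSON-RPC socket: it enables identify-command passthru, finishes framework init, creates the TCP transport, attaches the local PCIe controller and exports it as an NVMe-oF subsystem. The rpc_cmd helper seen in the log is effectively a wrapper around scripts/rpc.py, so the same sequence could be reproduced by hand roughly as sketched below; every command name and flag is copied from the rpc_cmd calls in the log, the default /var/tmp/spdk.sock RPC socket is assumed, and this is an illustrative sketch rather than the verbatim test script:

    RPC=scripts/rpc.py                                    # assumes the SPDK repo root as cwd and the default RPC socket
    $RPC nvmf_set_config --passthru-identify-ctrlr        # admin_cmd_passthru.identify_ctrlr = true (see the JSON request below)
    $RPC framework_start_init                             # needed because the target was started with --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o -u 8192          # flags copied verbatim from the rpc_cmd call in the log
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420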
00:28:25.736 [2024-05-15 01:05:12.753466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.736 [2024-05-15 01:05:12.753540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.736 [2024-05-15 01:05:12.753642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.736 [2024-05-15 01:05:12.753652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.304 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:26.304 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:28:26.304 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:26.304 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.304 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.304 INFO: Log level set to 20 00:28:26.304 INFO: Requests: 00:28:26.304 { 00:28:26.304 "jsonrpc": "2.0", 00:28:26.304 "method": "nvmf_set_config", 00:28:26.304 "id": 1, 00:28:26.304 "params": { 00:28:26.304 "admin_cmd_passthru": { 00:28:26.304 "identify_ctrlr": true 00:28:26.304 } 00:28:26.304 } 00:28:26.304 } 00:28:26.304 00:28:26.304 INFO: response: 00:28:26.304 { 00:28:26.304 "jsonrpc": "2.0", 00:28:26.304 "id": 1, 00:28:26.304 "result": true 00:28:26.304 } 00:28:26.304 00:28:26.304 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.304 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:26.304 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.304 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.304 INFO: Setting log level to 20 00:28:26.304 INFO: Setting log level to 20 00:28:26.304 INFO: Log level set to 20 00:28:26.304 INFO: Log level set to 20 00:28:26.304 INFO: Requests: 00:28:26.304 { 00:28:26.304 "jsonrpc": "2.0", 00:28:26.304 "method": "framework_start_init", 00:28:26.304 "id": 1 00:28:26.304 } 00:28:26.304 00:28:26.304 INFO: Requests: 00:28:26.304 { 00:28:26.304 "jsonrpc": "2.0", 00:28:26.304 "method": "framework_start_init", 00:28:26.304 "id": 1 00:28:26.304 } 00:28:26.304 00:28:26.576 [2024-05-15 01:05:13.386572] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:26.576 INFO: response: 00:28:26.576 { 00:28:26.576 "jsonrpc": "2.0", 00:28:26.576 "id": 1, 00:28:26.576 "result": true 00:28:26.576 } 00:28:26.576 00:28:26.576 INFO: response: 00:28:26.576 { 00:28:26.576 "jsonrpc": "2.0", 00:28:26.576 "id": 1, 00:28:26.576 "result": true 00:28:26.576 } 00:28:26.576 00:28:26.576 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.576 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.576 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.576 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.576 INFO: Setting log level to 40 00:28:26.576 INFO: Setting log level to 40 00:28:26.576 INFO: Setting log level to 40 00:28:26.576 [2024-05-15 01:05:13.400593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.576 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.576 01:05:13 
nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:26.576 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.576 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.576 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0 00:28:26.576 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.576 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.837 Nvme0n1 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.837 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.837 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.837 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.837 [2024-05-15 01:05:13.828419] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:26.837 [2024-05-15 01:05:13.828747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.837 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.837 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.837 [ 00:28:26.837 { 00:28:26.837 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:26.837 "subtype": "Discovery", 00:28:26.837 "listen_addresses": [], 00:28:26.837 "allow_any_host": true, 00:28:26.837 "hosts": [] 00:28:26.837 }, 00:28:26.837 { 00:28:26.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.837 "subtype": "NVMe", 00:28:26.837 "listen_addresses": [ 00:28:26.838 { 00:28:26.838 "trtype": "TCP", 00:28:26.838 "adrfam": "IPv4", 00:28:26.838 "traddr": "10.0.0.2", 00:28:26.838 "trsvcid": "4420" 00:28:26.838 } 00:28:26.838 ], 00:28:26.838 "allow_any_host": true, 00:28:26.838 "hosts": [], 00:28:26.838 "serial_number": "SPDK00000000000001", 00:28:26.838 "model_number": "SPDK bdev Controller", 00:28:26.838 "max_namespaces": 1, 00:28:26.838 "min_cntlid": 1, 00:28:26.838 "max_cntlid": 65519, 
00:28:26.838 "namespaces": [ 00:28:26.838 { 00:28:26.838 "nsid": 1, 00:28:26.838 "bdev_name": "Nvme0n1", 00:28:26.838 "name": "Nvme0n1", 00:28:26.838 "nguid": "000000000000000100A0752342AA2262", 00:28:26.838 "uuid": "00000000-0000-0001-00a0-752342aa2262" 00:28:26.838 } 00:28:26.838 ] 00:28:26.838 } 00:28:26.838 ] 00:28:26.838 01:05:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.838 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:26.838 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:26.838 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:27.097 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.097 01:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=233442AA2262 00:28:27.097 01:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:27.097 01:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:27.097 01:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:27.097 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.357 01:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=Micron_7450_MTFDKBA960TFR 00:28:27.357 01:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 233442AA2262 '!=' 233442AA2262 ']' 00:28:27.357 01:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' Micron_7450_MTFDKBA960TFR '!=' Micron_7450_MTFDKBA960TFR ']' 00:28:27.357 01:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.357 01:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:27.357 01:05:14 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:27.357 rmmod nvme_tcp 00:28:27.357 rmmod nvme_fabrics 00:28:27.357 rmmod nvme_keyring 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3644524 
']' 00:28:27.357 01:05:14 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3644524 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3644524 ']' 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3644524 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3644524 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3644524' 00:28:27.357 killing process with pid 3644524 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3644524 00:28:27.357 [2024-05-15 01:05:14.292541] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:27.357 01:05:14 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3644524 00:28:28.736 01:05:15 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:28.736 01:05:15 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:28.736 01:05:15 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:28.736 01:05:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:28.736 01:05:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:28.736 01:05:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.736 01:05:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:28.736 01:05:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.641 01:05:17 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:30.641 00:28:30.641 real 0m13.117s 00:28:30.641 user 0m13.581s 00:28:30.641 sys 0m4.815s 00:28:30.641 01:05:17 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:30.641 01:05:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:30.641 ************************************ 00:28:30.641 END TEST nvmf_identify_passthru 00:28:30.641 ************************************ 00:28:30.641 01:05:17 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:30.641 01:05:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:30.641 01:05:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:30.641 01:05:17 -- common/autotest_common.sh@10 -- # set +x 00:28:30.641 ************************************ 00:28:30.641 START TEST nvmf_dif 00:28:30.641 ************************************ 00:28:30.641 01:05:17 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:30.641 * Looking for test storage... 
00:28:30.641 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:28:30.641 01:05:17 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:30.641 01:05:17 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.641 01:05:17 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.641 01:05:17 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.641 01:05:17 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.641 01:05:17 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.641 01:05:17 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.641 01:05:17 nvmf_dif -- paths/export.sh@5 -- # export 
PATH 00:28:30.641 01:05:17 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.641 01:05:17 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:30.641 01:05:17 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:30.641 01:05:17 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:30.641 01:05:17 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:30.641 01:05:17 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.641 01:05:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.641 01:05:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:30.641 01:05:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.900 01:05:17 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:28:30.900 01:05:17 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.900 01:05:17 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.900 01:05:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:36.211 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:36.211 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.211 01:05:22 
nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:36.211 Found net devices under 0000:27:00.0: cvl_0_0 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:36.211 Found net devices under 0000:27:00.1: cvl_0_1 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.211 01:05:22 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.212 01:05:22 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.212 01:05:23 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.212 01:05:23 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:36.212 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:28:36.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:28:36.212 00:28:36.212 --- 10.0.0.2 ping statistics --- 00:28:36.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.212 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:28:36.212 01:05:23 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:28:36.212 00:28:36.212 --- 10.0.0.1 ping statistics --- 00:28:36.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.212 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:28:36.212 01:05:23 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.212 01:05:23 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:36.212 01:05:23 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:36.212 01:05:23 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:28:38.746 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:38.746 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:28:38.746 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:38.746 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:38.746 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:38.746 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:38.746 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:38.746 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:38.746 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:38.746 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:38.747 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:38.747 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:38.747 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:38.747 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:38.747 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:38.747 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:38.747 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:38.747 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:38.747 01:05:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:38.747 01:05:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:38.747 01:05:25 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:38.747 01:05:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3650341 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3650341 00:28:38.747 01:05:25 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3650341 
']' 00:28:38.747 01:05:25 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.747 01:05:25 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:38.747 01:05:25 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.747 01:05:25 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:38.747 01:05:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:38.747 01:05:25 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:39.005 [2024-05-15 01:05:25.866015] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:28:39.005 [2024-05-15 01:05:25.866124] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.005 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.005 [2024-05-15 01:05:25.985594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.262 [2024-05-15 01:05:26.077438] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.262 [2024-05-15 01:05:26.077476] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.262 [2024-05-15 01:05:26.077485] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.262 [2024-05-15 01:05:26.077494] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.262 [2024-05-15 01:05:26.077501] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
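The nvmf_tcp_init sequence traced earlier in this run reduces to the sketch below. Interface names and addresses are simply what this run discovered and picked (cvl_0_0/cvl_0_1 under 0000:27:00.0 and 0000:27:00.1, 10.0.0.1/10.0.0.2); they are not fixed values, and the sketch only restates the commands visible in the trace.
# Move the target-side interface into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic in on the initiator side, then check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1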
00:28:39.262 [2024-05-15 01:05:26.077528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.522 01:05:26 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:39.522 01:05:26 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:28:39.522 01:05:26 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:39.522 01:05:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.522 01:05:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:39.782 01:05:26 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.782 01:05:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:39.782 01:05:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:39.782 01:05:26 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.782 01:05:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:39.782 [2024-05-15 01:05:26.616619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.782 01:05:26 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.782 01:05:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:39.782 01:05:26 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:39.782 01:05:26 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:39.782 01:05:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:39.782 ************************************ 00:28:39.782 START TEST fio_dif_1_default 00:28:39.782 ************************************ 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:39.782 bdev_null0 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:39.782 [2024-05-15 01:05:26.688585] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:39.782 [2024-05-15 01:05:26.688813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:39.782 { 00:28:39.782 "params": { 00:28:39.782 "name": "Nvme$subsystem", 00:28:39.782 "trtype": "$TEST_TRANSPORT", 00:28:39.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.782 "adrfam": "ipv4", 00:28:39.782 "trsvcid": "$NVMF_PORT", 00:28:39.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.782 "hdgst": ${hdgst:-false}, 00:28:39.782 "ddgst": ${ddgst:-false} 00:28:39.782 }, 00:28:39.782 "method": "bdev_nvme_attach_controller" 00:28:39.782 } 00:28:39.782 EOF 00:28:39.782 )") 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 
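The create_subsystems step for fio_dif_1_default traced above amounts to four RPCs against the running target. A rough manual equivalent is below, assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket; all arguments are taken from the trace.
# Null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1,
# exported through one NVMe-oF subsystem listening on the namespaced interface.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420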
00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:39.782 01:05:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:39.782 "params": { 00:28:39.782 "name": "Nvme0", 00:28:39.782 "trtype": "tcp", 00:28:39.782 "traddr": "10.0.0.2", 00:28:39.782 "adrfam": "ipv4", 00:28:39.782 "trsvcid": "4420", 00:28:39.782 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:39.782 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:39.782 "hdgst": false, 00:28:39.782 "ddgst": false 00:28:39.783 }, 00:28:39.783 "method": "bdev_nvme_attach_controller" 00:28:39.783 }' 00:28:39.783 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:39.783 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:39.783 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # break 00:28:39.783 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:39.783 01:05:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:40.348 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:40.348 fio-3.35 00:28:40.348 Starting 1 thread 00:28:40.348 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.553 00:28:52.553 filename0: (groupid=0, jobs=1): err= 0: pid=3650954: Wed May 15 01:05:37 2024 00:28:52.553 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:28:52.553 slat (usec): min=3, max=139, avg= 7.04, stdev= 4.43 00:28:52.553 clat (usec): min=40811, max=43090, avg=40988.78, stdev=147.37 00:28:52.553 lat (usec): min=40818, max=43116, avg=40995.82, stdev=147.54 00:28:52.553 clat percentiles (usec): 00:28:52.553 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:28:52.553 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:52.554 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:52.554 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:28:52.554 | 99.99th=[43254] 00:28:52.554 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:28:52.554 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:28:52.554 lat (msec) : 50=100.00% 00:28:52.554 cpu : usr=95.97%, sys=3.74%, ctx=14, majf=0, minf=1634 00:28:52.554 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:52.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:28:52.554 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.554 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:52.554 00:28:52.554 Run status group 0 (all jobs): 00:28:52.554 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10007-10007msec 00:28:52.554 ----------------------------------------------------- 00:28:52.554 Suppressions used: 00:28:52.554 count bytes template 00:28:52.554 1 8 /usr/src/fio/parse.c 00:28:52.554 1 8 libtcmalloc_minimal.so 00:28:52.554 1 904 libcrypto.so 00:28:52.554 ----------------------------------------------------- 00:28:52.554 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 00:28:52.554 real 0m11.900s 00:28:52.554 user 0m25.493s 00:28:52.554 sys 0m0.843s 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 ************************************ 00:28:52.554 END TEST fio_dif_1_default 00:28:52.554 ************************************ 00:28:52.554 01:05:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:52.554 01:05:38 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:52.554 01:05:38 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 ************************************ 00:28:52.554 START TEST fio_dif_1_multi_subsystems 00:28:52.554 ************************************ 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:52.554 01:05:38 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 bdev_null0 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 [2024-05-15 01:05:38.649254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 bdev_null1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.554 { 00:28:52.554 "params": { 00:28:52.554 "name": "Nvme$subsystem", 00:28:52.554 "trtype": "$TEST_TRANSPORT", 00:28:52.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.554 "adrfam": "ipv4", 00:28:52.554 "trsvcid": "$NVMF_PORT", 00:28:52.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.554 "hdgst": ${hdgst:-false}, 00:28:52.554 "ddgst": ${ddgst:-false} 00:28:52.554 }, 00:28:52.554 "method": "bdev_nvme_attach_controller" 00:28:52.554 } 
00:28:52.554 EOF 00:28:52.554 )") 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:52.554 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.555 { 00:28:52.555 "params": { 00:28:52.555 "name": "Nvme$subsystem", 00:28:52.555 "trtype": "$TEST_TRANSPORT", 00:28:52.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.555 "adrfam": "ipv4", 00:28:52.555 "trsvcid": "$NVMF_PORT", 00:28:52.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.555 "hdgst": ${hdgst:-false}, 00:28:52.555 "ddgst": ${ddgst:-false} 00:28:52.555 }, 00:28:52.555 "method": "bdev_nvme_attach_controller" 00:28:52.555 } 00:28:52.555 EOF 00:28:52.555 )") 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:52.555 "params": { 00:28:52.555 "name": "Nvme0", 00:28:52.555 "trtype": "tcp", 00:28:52.555 "traddr": "10.0.0.2", 00:28:52.555 "adrfam": "ipv4", 00:28:52.555 "trsvcid": "4420", 00:28:52.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:52.555 "hdgst": false, 00:28:52.555 "ddgst": false 00:28:52.555 }, 00:28:52.555 "method": "bdev_nvme_attach_controller" 00:28:52.555 },{ 00:28:52.555 "params": { 00:28:52.555 "name": "Nvme1", 00:28:52.555 "trtype": "tcp", 00:28:52.555 "traddr": "10.0.0.2", 00:28:52.555 "adrfam": "ipv4", 00:28:52.555 "trsvcid": "4420", 00:28:52.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:52.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:52.555 "hdgst": false, 00:28:52.555 "ddgst": false 00:28:52.555 }, 00:28:52.555 "method": "bdev_nvme_attach_controller" 00:28:52.555 }' 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # break 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:52.555 01:05:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.555 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:52.555 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:52.555 fio-3.35 00:28:52.555 Starting 2 threads 00:28:52.555 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.764 00:29:04.764 filename0: (groupid=0, jobs=1): err= 0: pid=3653463: Wed May 15 01:05:49 2024 00:29:04.764 read: IOPS=188, BW=753KiB/s (771kB/s)(7536KiB/10003msec) 00:29:04.764 slat (nsec): min=3873, max=22977, avg=6741.56, stdev=1192.79 00:29:04.764 clat (usec): min=450, max=42531, avg=21217.30, stdev=20520.09 00:29:04.764 lat (usec): min=456, max=42537, avg=21224.04, stdev=20519.75 00:29:04.764 clat percentiles (usec): 00:29:04.764 | 1.00th=[ 457], 5.00th=[ 461], 10.00th=[ 469], 20.00th=[ 474], 00:29:04.764 | 30.00th=[ 482], 40.00th=[ 490], 50.00th=[41157], 60.00th=[41681], 00:29:04.765 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:29:04.765 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:29:04.765 | 99.99th=[42730] 00:29:04.765 bw ( KiB/s): min= 608, max= 768, per=49.90%, avg=754.53, stdev=38.92, samples=19 00:29:04.765 iops : min= 152, max= 192, avg=188.63, stdev= 9.73, samples=19 00:29:04.765 lat (usec) : 500=44.32%, 750=5.15% 00:29:04.765 lat (msec) : 50=50.53% 00:29:04.765 cpu : usr=98.31%, sys=1.42%, ctx=14, majf=0, minf=1634 00:29:04.765 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:04.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.765 
issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.765 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:04.765 filename1: (groupid=0, jobs=1): err= 0: pid=3653464: Wed May 15 01:05:49 2024 00:29:04.765 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10006msec) 00:29:04.765 slat (nsec): min=4188, max=20594, avg=6711.27, stdev=1042.99 00:29:04.765 clat (usec): min=437, max=42602, avg=21090.94, stdev=20538.81 00:29:04.765 lat (usec): min=443, max=42610, avg=21097.65, stdev=20538.56 00:29:04.765 clat percentiles (usec): 00:29:04.765 | 1.00th=[ 449], 5.00th=[ 453], 10.00th=[ 457], 20.00th=[ 465], 00:29:04.765 | 30.00th=[ 469], 40.00th=[ 478], 50.00th=[40633], 60.00th=[41681], 00:29:04.765 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:29:04.765 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:29:04.765 | 99.99th=[42730] 00:29:04.765 bw ( KiB/s): min= 672, max= 768, per=50.03%, avg=756.80, stdev=28.00, samples=20 00:29:04.765 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:29:04.765 lat (usec) : 500=47.47%, 750=2.11%, 1000=0.21% 00:29:04.765 lat (msec) : 50=50.21% 00:29:04.765 cpu : usr=98.33%, sys=1.40%, ctx=14, majf=0, minf=1634 00:29:04.765 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:04.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.765 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.765 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:04.765 00:29:04.765 Run status group 0 (all jobs): 00:29:04.765 READ: bw=1511KiB/s (1547kB/s), 753KiB/s-758KiB/s (771kB/s-776kB/s), io=14.8MiB (15.5MB), run=10003-10006msec 00:29:04.765 ----------------------------------------------------- 00:29:04.765 Suppressions used: 00:29:04.765 count bytes template 00:29:04.765 2 16 /usr/src/fio/parse.c 00:29:04.765 1 8 libtcmalloc_minimal.so 00:29:04.765 1 904 libcrypto.so 00:29:04.765 ----------------------------------------------------- 00:29:04.765 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
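destroy_subsystems mirrors that setup: as the surrounding trace shows, each subsystem index is torn down with a subsystem delete followed by a bdev delete, roughly as follows (same scripts/rpc.py assumption as above).
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_null_delete bdev_null1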
00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.765 00:29:04.765 real 0m12.102s 00:29:04.765 user 0m38.861s 00:29:04.765 sys 0m0.717s 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 ************************************ 00:29:04.765 END TEST fio_dif_1_multi_subsystems 00:29:04.765 ************************************ 00:29:04.765 01:05:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:04.765 01:05:50 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:04.765 01:05:50 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 ************************************ 00:29:04.765 START TEST fio_dif_rand_params 00:29:04.765 ************************************ 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:04.765 01:05:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 bdev_null0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 [2024-05-15 01:05:50.820281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:04.765 01:05:50 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:04.765 01:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:04.765 { 00:29:04.765 "params": { 00:29:04.765 "name": "Nvme$subsystem", 00:29:04.765 "trtype": "$TEST_TRANSPORT", 00:29:04.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.765 "adrfam": "ipv4", 00:29:04.765 "trsvcid": "$NVMF_PORT", 00:29:04.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.765 "hdgst": ${hdgst:-false}, 00:29:04.765 "ddgst": ${ddgst:-false} 00:29:04.765 }, 00:29:04.765 "method": "bdev_nvme_attach_controller" 00:29:04.766 } 00:29:04.766 EOF 00:29:04.766 )") 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:04.766 "params": { 00:29:04.766 "name": "Nvme0", 00:29:04.766 "trtype": "tcp", 00:29:04.766 "traddr": "10.0.0.2", 00:29:04.766 "adrfam": "ipv4", 00:29:04.766 "trsvcid": "4420", 00:29:04.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:04.766 "hdgst": false, 00:29:04.766 "ddgst": false 00:29:04.766 }, 00:29:04.766 "method": "bdev_nvme_attach_controller" 00:29:04.766 }' 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # break 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:04.766 01:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:04.766 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:04.766 ... 
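Every fio run in this log is launched the same way; stripped of the xtrace noise it is roughly the invocation below. /dev/fd/62 carries the generated bdev_nvme attach JSON and /dev/fd/61 the generated fio job section (for this test a 128k randread workload with three jobs at iodepth 3 and a 5 second runtime, per the parameters set earlier). SPDK here is just shorthand for the workspace path seen in the trace, not a variable the harness defines.
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk   # shorthand for the path in the trace
LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61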
00:29:04.766 fio-3.35 00:29:04.766 Starting 3 threads 00:29:04.766 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.031 00:29:10.031 filename0: (groupid=0, jobs=1): err= 0: pid=3655844: Wed May 15 01:05:57 2024 00:29:10.031 read: IOPS=320, BW=40.1MiB/s (42.0MB/s)(202MiB/5044msec) 00:29:10.031 slat (nsec): min=4927, max=21611, avg=7336.25, stdev=1090.58 00:29:10.031 clat (usec): min=3112, max=50859, avg=9296.91, stdev=9114.51 00:29:10.031 lat (usec): min=3118, max=50867, avg=9304.24, stdev=9114.59 00:29:10.031 clat percentiles (usec): 00:29:10.031 | 1.00th=[ 3458], 5.00th=[ 4146], 10.00th=[ 5342], 20.00th=[ 5866], 00:29:10.031 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7767], 60.00th=[ 8291], 00:29:10.031 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[44303], 00:29:10.031 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[51119], 00:29:10.031 | 99.99th=[51119] 00:29:10.031 bw ( KiB/s): min=24064, max=55552, per=35.02%, avg=41318.40, stdev=10006.37, samples=10 00:29:10.031 iops : min= 188, max= 434, avg=322.80, stdev=78.17, samples=10 00:29:10.031 lat (msec) : 4=4.14%, 10=89.61%, 20=1.18%, 50=4.70%, 100=0.37% 00:29:10.031 cpu : usr=97.10%, sys=2.64%, ctx=6, majf=0, minf=1637 00:29:10.031 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.031 issued rwts: total=1617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.031 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:10.031 filename0: (groupid=0, jobs=1): err= 0: pid=3655845: Wed May 15 01:05:57 2024 00:29:10.031 read: IOPS=288, BW=36.0MiB/s (37.8MB/s)(180MiB/5002msec) 00:29:10.031 slat (nsec): min=6121, max=27305, avg=8468.31, stdev=2252.15 00:29:10.031 clat (usec): min=3268, max=88234, avg=10402.38, stdev=10248.35 00:29:10.031 lat (usec): min=3274, max=88241, avg=10410.85, stdev=10248.08 00:29:10.031 clat percentiles (usec): 00:29:10.031 | 1.00th=[ 3556], 5.00th=[ 4113], 10.00th=[ 5473], 20.00th=[ 6194], 00:29:10.031 | 30.00th=[ 6521], 40.00th=[ 7373], 50.00th=[ 8586], 60.00th=[ 9110], 00:29:10.031 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[46924], 00:29:10.031 | 99.00th=[51119], 99.50th=[52167], 99.90th=[52691], 99.95th=[88605], 00:29:10.031 | 99.99th=[88605] 00:29:10.031 bw ( KiB/s): min=26368, max=54016, per=31.22%, avg=36838.40, stdev=10199.71, samples=10 00:29:10.031 iops : min= 206, max= 422, avg=287.80, stdev=79.69, samples=10 00:29:10.031 lat (msec) : 4=4.44%, 10=78.63%, 20=10.76%, 50=4.02%, 100=2.15% 00:29:10.031 cpu : usr=97.52%, sys=2.18%, ctx=7, majf=0, minf=1634 00:29:10.031 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.031 issued rwts: total=1441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.031 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:10.031 filename0: (groupid=0, jobs=1): err= 0: pid=3655846: Wed May 15 01:05:57 2024 00:29:10.031 read: IOPS=315, BW=39.4MiB/s (41.4MB/s)(199MiB/5042msec) 00:29:10.031 slat (usec): min=6, max=140, avg= 8.39, stdev= 3.96 00:29:10.031 clat (usec): min=3334, max=87974, avg=9471.56, stdev=8904.71 00:29:10.031 lat (usec): min=3340, max=87982, avg=9479.95, stdev=8904.73 00:29:10.031 clat percentiles (usec): 
00:29:10.031 | 1.00th=[ 3490], 5.00th=[ 4178], 10.00th=[ 5342], 20.00th=[ 5932], 00:29:10.031 | 30.00th=[ 6259], 40.00th=[ 6718], 50.00th=[ 8094], 60.00th=[ 8848], 00:29:10.031 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[12387], 00:29:10.031 | 99.00th=[49546], 99.50th=[51119], 99.90th=[87557], 99.95th=[87557], 00:29:10.031 | 99.99th=[87557] 00:29:10.031 bw ( KiB/s): min=28416, max=49408, per=34.48%, avg=40678.40, stdev=6452.07, samples=10 00:29:10.031 iops : min= 222, max= 386, avg=317.80, stdev=50.41, samples=10 00:29:10.031 lat (msec) : 4=4.09%, 10=79.51%, 20=12.13%, 50=3.46%, 100=0.82% 00:29:10.031 cpu : usr=95.12%, sys=3.53%, ctx=431, majf=0, minf=1633 00:29:10.031 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.031 issued rwts: total=1591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.031 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:10.031 00:29:10.031 Run status group 0 (all jobs): 00:29:10.031 READ: bw=115MiB/s (121MB/s), 36.0MiB/s-40.1MiB/s (37.8MB/s-42.0MB/s), io=581MiB (609MB), run=5002-5044msec 00:29:10.600 ----------------------------------------------------- 00:29:10.600 Suppressions used: 00:29:10.600 count bytes template 00:29:10.600 5 44 /usr/src/fio/parse.c 00:29:10.600 1 8 libtcmalloc_minimal.so 00:29:10.600 1 904 libcrypto.so 00:29:10.600 ----------------------------------------------------- 00:29:10.600 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:10.600 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:10.601 01:05:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 bdev_null0 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 [2024-05-15 01:05:57.503802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 bdev_null1 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 bdev_null2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.601 01:05:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.601 { 00:29:10.601 "params": { 00:29:10.601 "name": "Nvme$subsystem", 00:29:10.601 "trtype": "$TEST_TRANSPORT", 00:29:10.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.601 "adrfam": "ipv4", 00:29:10.601 "trsvcid": "$NVMF_PORT", 00:29:10.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.601 "hdgst": ${hdgst:-false}, 00:29:10.601 "ddgst": ${ddgst:-false} 00:29:10.601 }, 00:29:10.601 "method": "bdev_nvme_attach_controller" 00:29:10.601 } 00:29:10.601 EOF 00:29:10.601 )") 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.601 { 00:29:10.601 "params": { 00:29:10.601 "name": "Nvme$subsystem", 00:29:10.601 "trtype": "$TEST_TRANSPORT", 
00:29:10.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.601 "adrfam": "ipv4", 00:29:10.601 "trsvcid": "$NVMF_PORT", 00:29:10.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.601 "hdgst": ${hdgst:-false}, 00:29:10.601 "ddgst": ${ddgst:-false} 00:29:10.601 }, 00:29:10.601 "method": "bdev_nvme_attach_controller" 00:29:10.601 } 00:29:10.601 EOF 00:29:10.601 )") 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:10.601 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.602 { 00:29:10.602 "params": { 00:29:10.602 "name": "Nvme$subsystem", 00:29:10.602 "trtype": "$TEST_TRANSPORT", 00:29:10.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.602 "adrfam": "ipv4", 00:29:10.602 "trsvcid": "$NVMF_PORT", 00:29:10.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.602 "hdgst": ${hdgst:-false}, 00:29:10.602 "ddgst": ${ddgst:-false} 00:29:10.602 }, 00:29:10.602 "method": "bdev_nvme_attach_controller" 00:29:10.602 } 00:29:10.602 EOF 00:29:10.602 )") 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # break 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:10.602 01:05:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:10.602 "params": { 00:29:10.602 "name": "Nvme0", 00:29:10.602 "trtype": "tcp", 00:29:10.602 "traddr": "10.0.0.2", 00:29:10.602 "adrfam": "ipv4", 00:29:10.602 "trsvcid": "4420", 00:29:10.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:10.602 "hdgst": false, 00:29:10.602 "ddgst": false 00:29:10.602 }, 00:29:10.602 "method": "bdev_nvme_attach_controller" 00:29:10.602 },{ 00:29:10.602 "params": { 00:29:10.602 "name": "Nvme1", 00:29:10.602 "trtype": "tcp", 00:29:10.602 "traddr": "10.0.0.2", 00:29:10.602 "adrfam": "ipv4", 00:29:10.602 "trsvcid": "4420", 00:29:10.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.602 "hdgst": false, 00:29:10.602 "ddgst": false 00:29:10.602 }, 00:29:10.602 "method": "bdev_nvme_attach_controller" 00:29:10.602 },{ 00:29:10.602 "params": { 00:29:10.602 "name": "Nvme2", 00:29:10.602 "trtype": "tcp", 00:29:10.602 "traddr": "10.0.0.2", 00:29:10.602 "adrfam": "ipv4", 00:29:10.602 "trsvcid": 
"4420", 00:29:10.602 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:10.602 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:10.602 "hdgst": false, 00:29:10.602 "ddgst": false 00:29:10.602 }, 00:29:10.602 "method": "bdev_nvme_attach_controller" 00:29:10.602 }' 00:29:11.167 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:11.167 ... 00:29:11.167 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:11.167 ... 00:29:11.167 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:11.167 ... 00:29:11.167 fio-3.35 00:29:11.167 Starting 24 threads 00:29:11.167 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.394 00:29:23.394 filename0: (groupid=0, jobs=1): err= 0: pid=3657315: Wed May 15 01:06:08 2024 00:29:23.394 read: IOPS=503, BW=2012KiB/s (2060kB/s)(19.7MiB/10019msec) 00:29:23.394 slat (nsec): min=4276, max=99481, avg=20139.63, stdev=17829.92 00:29:23.394 clat (usec): min=4670, max=41468, avg=31639.26, stdev=2925.80 00:29:23.394 lat (usec): min=4675, max=41480, avg=31659.40, stdev=2926.30 00:29:23.394 clat percentiles (usec): 00:29:23.394 | 1.00th=[ 8848], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:29:23.394 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:29:23.394 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:29:23.394 | 99.00th=[34341], 99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 00:29:23.394 | 99.99th=[41681] 00:29:23.394 bw ( KiB/s): min= 1920, max= 2304, per=4.21%, avg=2009.60, stdev=93.78, samples=20 00:29:23.394 iops : min= 480, max= 576, avg=502.40, stdev=23.45, samples=20 00:29:23.394 lat (msec) : 10=1.23%, 20=0.04%, 50=98.73% 00:29:23.394 cpu : usr=98.46%, sys=0.82%, ctx=40, majf=0, minf=1634 00:29:23.394 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:23.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.394 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.394 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.394 filename0: (groupid=0, jobs=1): err= 0: pid=3657316: Wed May 15 01:06:08 2024 00:29:23.394 read: IOPS=496, BW=1988KiB/s (2036kB/s)(19.4MiB/10013msec) 00:29:23.394 slat (usec): min=5, max=101, avg=26.12, stdev=20.04 00:29:23.394 clat (usec): min=21330, max=62933, avg=31921.68, stdev=1880.97 00:29:23.394 lat (usec): min=21343, max=62983, avg=31947.80, stdev=1880.90 00:29:23.394 clat percentiles (usec): 00:29:23.394 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.394 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:29:23.394 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.394 | 99.00th=[32900], 99.50th=[35390], 99.90th=[62653], 99.95th=[63177], 00:29:23.394 | 99.99th=[63177] 00:29:23.394 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1984.15, stdev=77.30, samples=20 00:29:23.394 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:29:23.394 lat (msec) : 50=99.68%, 100=0.32% 00:29:23.394 cpu : usr=98.50%, sys=0.79%, ctx=45, majf=0, minf=1633 00:29:23.394 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:29:23.394 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.394 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.394 filename0: (groupid=0, jobs=1): err= 0: pid=3657317: Wed May 15 01:06:08 2024 00:29:23.394 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10024msec) 00:29:23.394 slat (usec): min=5, max=119, avg=28.06, stdev=21.39 00:29:23.394 clat (usec): min=12657, max=53274, avg=31923.58, stdev=1089.83 00:29:23.395 lat (usec): min=12669, max=53300, avg=31951.64, stdev=1086.20 00:29:23.395 clat percentiles (usec): 00:29:23.395 | 1.00th=[31327], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:29:23.395 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:29:23.395 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:29:23.395 | 99.00th=[34341], 99.50th=[34341], 99.90th=[43254], 99.95th=[43254], 00:29:23.395 | 99.99th=[53216] 00:29:23.395 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1990.55, stdev=65.17, samples=20 00:29:23.395 iops : min= 480, max= 512, avg=497.60, stdev=16.33, samples=20 00:29:23.395 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:29:23.395 cpu : usr=98.64%, sys=0.75%, ctx=109, majf=0, minf=1638 00:29:23.395 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:23.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.395 filename0: (groupid=0, jobs=1): err= 0: pid=3657318: Wed May 15 01:06:08 2024 00:29:23.395 read: IOPS=495, BW=1984KiB/s (2031kB/s)(19.4MiB/10001msec) 00:29:23.395 slat (usec): min=4, max=103, avg=27.83, stdev=12.35 00:29:23.395 clat (usec): min=18984, max=87627, avg=32013.91, stdev=2561.65 00:29:23.395 lat (usec): min=19026, max=87648, avg=32041.74, stdev=2560.50 00:29:23.395 clat percentiles (usec): 00:29:23.395 | 1.00th=[31589], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.395 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:29:23.395 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.395 | 99.00th=[32900], 99.50th=[34341], 99.90th=[74974], 99.95th=[74974], 00:29:23.395 | 99.99th=[87557] 00:29:23.395 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1980.63, stdev=78.31, samples=19 00:29:23.395 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:29:23.395 lat (msec) : 20=0.04%, 50=99.64%, 100=0.32% 00:29:23.395 cpu : usr=98.93%, sys=0.60%, ctx=41, majf=0, minf=1634 00:29:23.395 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:23.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.395 filename0: (groupid=0, jobs=1): err= 0: pid=3657319: Wed May 15 01:06:08 2024 00:29:23.395 read: IOPS=496, BW=1987KiB/s (2034kB/s)(19.4MiB/10019msec) 00:29:23.395 slat (usec): min=5, max=108, avg=37.18, stdev=15.03 00:29:23.395 clat (usec): min=27712, max=60980, avg=31872.18, stdev=1690.94 00:29:23.395 lat (usec): min=27758, max=61000, avg=31909.36, 
stdev=1689.80 00:29:23.395 clat percentiles (usec): 00:29:23.395 | 1.00th=[31327], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:29:23.395 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:29:23.395 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:29:23.395 | 99.00th=[33424], 99.50th=[34341], 99.90th=[61080], 99.95th=[61080], 00:29:23.395 | 99.99th=[61080] 00:29:23.395 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1984.00, stdev=77.69, samples=20 00:29:23.395 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:29:23.395 lat (msec) : 50=99.68%, 100=0.32% 00:29:23.395 cpu : usr=98.93%, sys=0.59%, ctx=48, majf=0, minf=1634 00:29:23.395 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.395 filename0: (groupid=0, jobs=1): err= 0: pid=3657320: Wed May 15 01:06:08 2024 00:29:23.395 read: IOPS=495, BW=1984KiB/s (2031kB/s)(19.4MiB/10001msec) 00:29:23.395 slat (nsec): min=3933, max=65625, avg=26907.17, stdev=11815.44 00:29:23.395 clat (usec): min=28962, max=74479, avg=32006.60, stdev=2432.11 00:29:23.395 lat (usec): min=28975, max=74499, avg=32033.51, stdev=2431.16 00:29:23.395 clat percentiles (usec): 00:29:23.395 | 1.00th=[31589], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.395 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:29:23.395 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.395 | 99.00th=[32900], 99.50th=[34341], 99.90th=[74974], 99.95th=[74974], 00:29:23.395 | 99.99th=[74974] 00:29:23.395 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1980.63, stdev=78.31, samples=19 00:29:23.395 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:29:23.395 lat (msec) : 50=99.68%, 100=0.32% 00:29:23.395 cpu : usr=98.90%, sys=0.66%, ctx=13, majf=0, minf=1634 00:29:23.395 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.395 filename0: (groupid=0, jobs=1): err= 0: pid=3657321: Wed May 15 01:06:08 2024 00:29:23.395 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10004msec) 00:29:23.395 slat (nsec): min=4267, max=64735, avg=18182.16, stdev=12960.11 00:29:23.395 clat (usec): min=19153, max=77817, avg=32144.22, stdev=2687.93 00:29:23.395 lat (usec): min=19162, max=77838, avg=32162.40, stdev=2687.08 00:29:23.395 clat percentiles (usec): 00:29:23.395 | 1.00th=[31589], 5.00th=[31851], 10.00th=[31851], 20.00th=[31851], 00:29:23.395 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:29:23.395 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:29:23.395 | 99.00th=[33817], 99.50th=[34866], 99.90th=[78119], 99.95th=[78119], 00:29:23.395 | 99.99th=[78119] 00:29:23.395 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1980.79, stdev=77.91, samples=19 00:29:23.395 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 
00:29:23.395 lat (msec) : 20=0.12%, 50=99.56%, 100=0.32% 00:29:23.395 cpu : usr=98.45%, sys=0.84%, ctx=54, majf=0, minf=1634 00:29:23.395 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:23.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.395 filename0: (groupid=0, jobs=1): err= 0: pid=3657322: Wed May 15 01:06:08 2024 00:29:23.395 read: IOPS=496, BW=1986KiB/s (2034kB/s)(19.4MiB/10020msec) 00:29:23.395 slat (usec): min=3, max=142, avg=37.82, stdev=16.37 00:29:23.395 clat (usec): min=27757, max=61751, avg=31894.15, stdev=1734.89 00:29:23.395 lat (usec): min=27811, max=61772, avg=31931.98, stdev=1732.93 00:29:23.395 clat percentiles (usec): 00:29:23.395 | 1.00th=[31327], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:29:23.395 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:29:23.395 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.395 | 99.00th=[33424], 99.50th=[34341], 99.90th=[61604], 99.95th=[61604], 00:29:23.395 | 99.99th=[61604] 00:29:23.395 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1984.00, stdev=77.69, samples=20 00:29:23.395 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:29:23.395 lat (msec) : 50=99.68%, 100=0.32% 00:29:23.395 cpu : usr=98.26%, sys=0.91%, ctx=138, majf=0, minf=1634 00:29:23.395 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.395 filename1: (groupid=0, jobs=1): err= 0: pid=3657323: Wed May 15 01:06:08 2024 00:29:23.395 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10003msec) 00:29:23.395 slat (usec): min=5, max=116, avg=24.00, stdev=13.68 00:29:23.395 clat (usec): min=18975, max=89453, avg=32087.14, stdev=2683.89 00:29:23.395 lat (usec): min=19002, max=89478, avg=32111.15, stdev=2682.65 00:29:23.395 clat percentiles (usec): 00:29:23.395 | 1.00th=[31589], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:29:23.395 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:29:23.395 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.395 | 99.00th=[33817], 99.50th=[34866], 99.90th=[76022], 99.95th=[76022], 00:29:23.395 | 99.99th=[89654] 00:29:23.395 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1980.63, stdev=78.31, samples=19 00:29:23.395 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:29:23.395 lat (msec) : 20=0.08%, 50=99.60%, 100=0.32% 00:29:23.395 cpu : usr=98.40%, sys=0.84%, ctx=93, majf=0, minf=1636 00:29:23.395 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:23.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.395 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.395 filename1: (groupid=0, jobs=1): err= 0: pid=3657324: Wed May 15 
01:06:08 2024 00:29:23.395 read: IOPS=503, BW=2014KiB/s (2063kB/s)(19.7MiB/10008msec) 00:29:23.395 slat (nsec): min=5718, max=67551, avg=10295.39, stdev=4800.62 00:29:23.395 clat (usec): min=7107, max=38308, avg=31677.64, stdev=2871.27 00:29:23.395 lat (usec): min=7120, max=38341, avg=31687.94, stdev=2870.72 00:29:23.395 clat percentiles (usec): 00:29:23.396 | 1.00th=[ 8717], 5.00th=[31851], 10.00th=[31851], 20.00th=[31851], 00:29:23.396 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:29:23.396 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:29:23.396 | 99.00th=[32900], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:29:23.396 | 99.99th=[38536] 00:29:23.396 bw ( KiB/s): min= 1920, max= 2299, per=4.22%, avg=2014.05, stdev=93.03, samples=19 00:29:23.396 iops : min= 480, max= 574, avg=503.47, stdev=23.13, samples=19 00:29:23.396 lat (msec) : 10=1.27%, 20=0.32%, 50=98.41% 00:29:23.396 cpu : usr=98.46%, sys=0.84%, ctx=28, majf=0, minf=1637 00:29:23.396 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:23.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.396 filename1: (groupid=0, jobs=1): err= 0: pid=3657325: Wed May 15 01:06:08 2024 00:29:23.396 read: IOPS=501, BW=2006KiB/s (2055kB/s)(19.6MiB/10016msec) 00:29:23.396 slat (nsec): min=3883, max=98043, avg=24071.33, stdev=18574.50 00:29:23.396 clat (usec): min=3744, max=39099, avg=31692.85, stdev=2369.55 00:29:23.396 lat (usec): min=3752, max=39108, avg=31716.92, stdev=2370.40 00:29:23.396 clat percentiles (usec): 00:29:23.396 | 1.00th=[20317], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.396 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:29:23.396 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.396 | 99.00th=[32900], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:29:23.396 | 99.99th=[39060] 00:29:23.396 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=2003.20, stdev=75.15, samples=20 00:29:23.396 iops : min= 480, max= 544, avg=500.80, stdev=18.79, samples=20 00:29:23.396 lat (msec) : 4=0.04%, 10=0.60%, 20=0.36%, 50=99.00% 00:29:23.396 cpu : usr=98.91%, sys=0.56%, ctx=72, majf=0, minf=1636 00:29:23.396 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:23.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.396 filename1: (groupid=0, jobs=1): err= 0: pid=3657326: Wed May 15 01:06:08 2024 00:29:23.396 read: IOPS=496, BW=1986KiB/s (2033kB/s)(19.4MiB/10023msec) 00:29:23.396 slat (usec): min=5, max=118, avg=38.93, stdev=18.90 00:29:23.396 clat (usec): min=27560, max=64711, avg=31896.88, stdev=1900.87 00:29:23.396 lat (usec): min=27572, max=64744, avg=31935.81, stdev=1898.92 00:29:23.396 clat percentiles (usec): 00:29:23.396 | 1.00th=[31327], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:29:23.396 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:29:23.396 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 
95.00th=[32113], 00:29:23.396 | 99.00th=[33424], 99.50th=[34341], 99.90th=[64750], 99.95th=[64750], 00:29:23.396 | 99.99th=[64750] 00:29:23.396 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1984.00, stdev=77.69, samples=20 00:29:23.396 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:29:23.396 lat (msec) : 50=99.68%, 100=0.32% 00:29:23.396 cpu : usr=98.49%, sys=0.70%, ctx=74, majf=0, minf=1637 00:29:23.396 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.396 filename1: (groupid=0, jobs=1): err= 0: pid=3657327: Wed May 15 01:06:08 2024 00:29:23.396 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10006msec) 00:29:23.396 slat (nsec): min=6976, max=98993, avg=25663.94, stdev=19560.91 00:29:23.396 clat (usec): min=21339, max=83147, avg=31993.47, stdev=2978.08 00:29:23.396 lat (usec): min=21349, max=83175, avg=32019.14, stdev=2977.64 00:29:23.396 clat percentiles (usec): 00:29:23.396 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.396 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:29:23.396 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.396 | 99.00th=[32900], 99.50th=[35390], 99.90th=[83362], 99.95th=[83362], 00:29:23.396 | 99.99th=[83362] 00:29:23.396 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1980.63, stdev=78.31, samples=19 00:29:23.396 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:29:23.396 lat (msec) : 50=99.68%, 100=0.32% 00:29:23.396 cpu : usr=98.80%, sys=0.73%, ctx=17, majf=0, minf=1635 00:29:23.396 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.396 filename1: (groupid=0, jobs=1): err= 0: pid=3657328: Wed May 15 01:06:08 2024 00:29:23.396 read: IOPS=497, BW=1992KiB/s (2039kB/s)(19.5MiB/10006msec) 00:29:23.396 slat (usec): min=6, max=117, avg=35.67, stdev=17.48 00:29:23.396 clat (msec): min=15, max=107, avg=31.80, stdev= 3.98 00:29:23.396 lat (msec): min=15, max=107, avg=31.83, stdev= 3.98 00:29:23.396 clat percentiles (msec): 00:29:23.396 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 32], 00:29:23.396 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:29:23.396 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 33], 00:29:23.396 | 99.00th=[ 35], 99.50th=[ 39], 99.90th=[ 93], 99.95th=[ 93], 00:29:23.396 | 99.99th=[ 108] 00:29:23.396 bw ( KiB/s): min= 1664, max= 2096, per=4.17%, avg=1989.89, stdev=101.08, samples=19 00:29:23.396 iops : min= 416, max= 524, avg=497.47, stdev=25.27, samples=19 00:29:23.396 lat (msec) : 20=0.56%, 50=99.12%, 100=0.28%, 250=0.04% 00:29:23.396 cpu : usr=98.95%, sys=0.60%, ctx=13, majf=0, minf=1634 00:29:23.396 IO depths : 1=5.5%, 2=11.5%, 4=24.1%, 8=51.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:29:23.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 complete : 0=0.0%, 4=93.9%, 
8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 issued rwts: total=4982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.396 filename1: (groupid=0, jobs=1): err= 0: pid=3657329: Wed May 15 01:06:08 2024 00:29:23.396 read: IOPS=496, BW=1987KiB/s (2034kB/s)(19.4MiB/10019msec) 00:29:23.396 slat (usec): min=4, max=112, avg=39.08, stdev=16.41 00:29:23.396 clat (usec): min=27726, max=60897, avg=31867.37, stdev=1687.56 00:29:23.396 lat (usec): min=27764, max=60920, avg=31906.45, stdev=1686.06 00:29:23.396 clat percentiles (usec): 00:29:23.396 | 1.00th=[31327], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:29:23.396 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:29:23.396 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:29:23.396 | 99.00th=[33424], 99.50th=[34341], 99.90th=[61080], 99.95th=[61080], 00:29:23.396 | 99.99th=[61080] 00:29:23.396 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1984.00, stdev=77.69, samples=20 00:29:23.396 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:29:23.396 lat (msec) : 50=99.68%, 100=0.32% 00:29:23.396 cpu : usr=98.95%, sys=0.56%, ctx=92, majf=0, minf=1636 00:29:23.396 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.396 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.396 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.396 filename1: (groupid=0, jobs=1): err= 0: pid=3657330: Wed May 15 01:06:08 2024 00:29:23.396 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.4MiB/10011msec) 00:29:23.396 slat (nsec): min=4259, max=67018, avg=28187.12, stdev=13115.39 00:29:23.396 clat (usec): min=11952, max=72591, avg=31924.48, stdev=2583.19 00:29:23.396 lat (usec): min=11960, max=72613, avg=31952.66, stdev=2582.82 00:29:23.396 clat percentiles (usec): 00:29:23.396 | 1.00th=[31589], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.396 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:29:23.396 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.396 | 99.00th=[32900], 99.50th=[34341], 99.90th=[72877], 99.95th=[72877], 00:29:23.396 | 99.99th=[72877] 00:29:23.397 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1980.63, stdev=78.31, samples=19 00:29:23.397 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:29:23.397 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:29:23.397 cpu : usr=97.51%, sys=1.34%, ctx=728, majf=0, minf=1636 00:29:23.397 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.397 filename2: (groupid=0, jobs=1): err= 0: pid=3657331: Wed May 15 01:06:08 2024 00:29:23.397 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.4MiB/10011msec) 00:29:23.397 slat (nsec): min=4084, max=64984, avg=27379.10, stdev=12735.10 00:29:23.397 clat (usec): min=11979, max=85487, avg=31930.66, stdev=2687.30 00:29:23.397 lat (usec): min=11988, max=85510, avg=31958.04, stdev=2686.98 00:29:23.397 
clat percentiles (usec): 00:29:23.397 | 1.00th=[31589], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.397 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:29:23.397 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.397 | 99.00th=[32900], 99.50th=[34341], 99.90th=[72877], 99.95th=[72877], 00:29:23.397 | 99.99th=[85459] 00:29:23.397 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1980.63, stdev=78.31, samples=19 00:29:23.397 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:29:23.397 lat (msec) : 20=0.36%, 50=99.32%, 100=0.32% 00:29:23.397 cpu : usr=98.43%, sys=0.77%, ctx=133, majf=0, minf=1634 00:29:23.397 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:23.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.397 filename2: (groupid=0, jobs=1): err= 0: pid=3657332: Wed May 15 01:06:08 2024 00:29:23.397 read: IOPS=497, BW=1992KiB/s (2040kB/s)(19.5MiB/10025msec) 00:29:23.397 slat (usec): min=6, max=103, avg=16.40, stdev=12.26 00:29:23.397 clat (usec): min=12664, max=54076, avg=32005.22, stdev=1114.26 00:29:23.397 lat (usec): min=12676, max=54106, avg=32021.62, stdev=1112.60 00:29:23.397 clat percentiles (usec): 00:29:23.397 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[31851], 00:29:23.397 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:29:23.397 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.397 | 99.00th=[33817], 99.50th=[34341], 99.90th=[44303], 99.95th=[44303], 00:29:23.397 | 99.99th=[54264] 00:29:23.397 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1990.55, stdev=65.17, samples=20 00:29:23.397 iops : min= 480, max= 512, avg=497.60, stdev=16.33, samples=20 00:29:23.397 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:29:23.397 cpu : usr=99.11%, sys=0.51%, ctx=13, majf=0, minf=1635 00:29:23.397 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:23.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.397 filename2: (groupid=0, jobs=1): err= 0: pid=3657333: Wed May 15 01:06:08 2024 00:29:23.397 read: IOPS=501, BW=2006KiB/s (2054kB/s)(19.6MiB/10018msec) 00:29:23.397 slat (nsec): min=3927, max=96741, avg=14481.88, stdev=12258.51 00:29:23.397 clat (usec): min=4029, max=38361, avg=31787.08, stdev=2290.01 00:29:23.397 lat (usec): min=4038, max=38373, avg=31801.56, stdev=2289.75 00:29:23.397 clat percentiles (usec): 00:29:23.397 | 1.00th=[20055], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:29:23.397 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:29:23.397 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:29:23.397 | 99.00th=[32900], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:29:23.397 | 99.99th=[38536] 00:29:23.397 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=2003.20, stdev=75.15, samples=20 00:29:23.397 iops : min= 480, max= 544, avg=500.80, stdev=18.79, samples=20 00:29:23.397 lat (msec) : 
10=0.60%, 20=0.36%, 50=99.04% 00:29:23.397 cpu : usr=98.96%, sys=0.58%, ctx=54, majf=0, minf=1635 00:29:23.397 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:23.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.397 filename2: (groupid=0, jobs=1): err= 0: pid=3657334: Wed May 15 01:06:08 2024 00:29:23.397 read: IOPS=496, BW=1988KiB/s (2036kB/s)(19.4MiB/10013msec) 00:29:23.397 slat (usec): min=4, max=113, avg=20.67, stdev=11.40 00:29:23.397 clat (usec): min=21355, max=63249, avg=32019.32, stdev=1892.22 00:29:23.397 lat (usec): min=21372, max=63271, avg=32039.99, stdev=1891.69 00:29:23.397 clat percentiles (usec): 00:29:23.397 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:29:23.397 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:29:23.397 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.397 | 99.00th=[32900], 99.50th=[35390], 99.90th=[63177], 99.95th=[63177], 00:29:23.397 | 99.99th=[63177] 00:29:23.397 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1984.00, stdev=77.69, samples=20 00:29:23.397 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:29:23.397 lat (msec) : 50=99.68%, 100=0.32% 00:29:23.397 cpu : usr=99.12%, sys=0.48%, ctx=23, majf=0, minf=1635 00:29:23.397 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.397 filename2: (groupid=0, jobs=1): err= 0: pid=3657335: Wed May 15 01:06:08 2024 00:29:23.397 read: IOPS=496, BW=1987KiB/s (2034kB/s)(19.4MiB/10019msec) 00:29:23.397 slat (usec): min=3, max=109, avg=35.96, stdev=14.17 00:29:23.397 clat (usec): min=27751, max=60889, avg=31885.78, stdev=1684.21 00:29:23.397 lat (usec): min=27786, max=60911, avg=31921.74, stdev=1682.97 00:29:23.397 clat percentiles (usec): 00:29:23.397 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.397 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:29:23.397 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:29:23.397 | 99.00th=[33424], 99.50th=[34341], 99.90th=[61080], 99.95th=[61080], 00:29:23.397 | 99.99th=[61080] 00:29:23.397 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1984.00, stdev=77.69, samples=20 00:29:23.397 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:29:23.397 lat (msec) : 50=99.68%, 100=0.32% 00:29:23.397 cpu : usr=99.13%, sys=0.44%, ctx=12, majf=0, minf=1635 00:29:23.397 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.397 filename2: (groupid=0, jobs=1): err= 0: pid=3657336: Wed May 15 01:06:08 2024 00:29:23.397 read: 
IOPS=495, BW=1984KiB/s (2031kB/s)(19.4MiB/10002msec) 00:29:23.397 slat (nsec): min=5396, max=88787, avg=26276.66, stdev=12362.62 00:29:23.397 clat (usec): min=18521, max=75489, avg=32042.81, stdev=2678.69 00:29:23.397 lat (usec): min=18544, max=75515, avg=32069.09, stdev=2677.64 00:29:23.397 clat percentiles (usec): 00:29:23.397 | 1.00th=[31589], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:29:23.397 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:29:23.397 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.397 | 99.00th=[34341], 99.50th=[44827], 99.90th=[74974], 99.95th=[74974], 00:29:23.397 | 99.99th=[74974] 00:29:23.397 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1980.79, stdev=77.91, samples=19 00:29:23.397 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:29:23.397 lat (msec) : 20=0.28%, 50=99.40%, 100=0.32% 00:29:23.397 cpu : usr=98.82%, sys=0.68%, ctx=65, majf=0, minf=1633 00:29:23.397 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:29:23.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.397 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.397 filename2: (groupid=0, jobs=1): err= 0: pid=3657337: Wed May 15 01:06:08 2024 00:29:23.397 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10018msec) 00:29:23.397 slat (usec): min=6, max=100, avg=35.93, stdev=14.39 00:29:23.397 clat (usec): min=15221, max=78668, avg=31874.02, stdev=2467.12 00:29:23.398 lat (usec): min=15230, max=78693, avg=31909.94, stdev=2466.63 00:29:23.398 clat percentiles (usec): 00:29:23.398 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.398 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:29:23.398 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:29:23.398 | 99.00th=[33817], 99.50th=[34341], 99.90th=[69731], 99.95th=[69731], 00:29:23.398 | 99.99th=[79168] 00:29:23.398 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1980.63, stdev=78.31, samples=19 00:29:23.398 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:29:23.398 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:29:23.398 cpu : usr=98.92%, sys=0.61%, ctx=30, majf=0, minf=1635 00:29:23.398 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:23.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.398 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.398 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.398 filename2: (groupid=0, jobs=1): err= 0: pid=3657338: Wed May 15 01:06:08 2024 00:29:23.398 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10002msec) 00:29:23.398 slat (usec): min=5, max=106, avg=36.97, stdev=15.15 00:29:23.398 clat (usec): min=27609, max=43972, avg=31834.96, stdev=784.15 00:29:23.398 lat (usec): min=27623, max=44002, avg=31871.92, stdev=782.58 00:29:23.398 clat percentiles (usec): 00:29:23.398 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:29:23.398 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:29:23.398 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:29:23.398 | 99.00th=[33424], 
99.50th=[34341], 99.90th=[43779], 99.95th=[43779], 00:29:23.398 | 99.99th=[43779] 00:29:23.398 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1987.37, stdev=65.66, samples=19 00:29:23.398 iops : min= 480, max= 512, avg=496.84, stdev=16.42, samples=19 00:29:23.398 lat (msec) : 50=100.00% 00:29:23.398 cpu : usr=99.13%, sys=0.44%, ctx=13, majf=0, minf=1634 00:29:23.398 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:23.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.398 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.398 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:23.398 00:29:23.398 Run status group 0 (all jobs): 00:29:23.398 READ: bw=46.6MiB/s (48.9MB/s), 1983KiB/s-2014KiB/s (2030kB/s-2063kB/s), io=467MiB (490MB), run=10001-10025msec 00:29:23.398 ----------------------------------------------------- 00:29:23.398 Suppressions used: 00:29:23.398 count bytes template 00:29:23.398 45 402 /usr/src/fio/parse.c 00:29:23.398 1 8 libtcmalloc_minimal.so 00:29:23.398 1 904 libcrypto.so 00:29:23.398 ----------------------------------------------------- 00:29:23.398 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.398 bdev_null0 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.398 [2024-05-15 01:06:09.648036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.398 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.399 bdev_null1 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local 
fio_dir=/usr/src/fio 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.399 { 00:29:23.399 "params": { 00:29:23.399 "name": "Nvme$subsystem", 00:29:23.399 "trtype": "$TEST_TRANSPORT", 00:29:23.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.399 "adrfam": "ipv4", 00:29:23.399 "trsvcid": "$NVMF_PORT", 00:29:23.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.399 "hdgst": ${hdgst:-false}, 00:29:23.399 "ddgst": ${ddgst:-false} 00:29:23.399 }, 00:29:23.399 "method": "bdev_nvme_attach_controller" 00:29:23.399 } 00:29:23.399 EOF 00:29:23.399 )") 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.399 { 00:29:23.399 "params": { 00:29:23.399 "name": "Nvme$subsystem", 00:29:23.399 "trtype": "$TEST_TRANSPORT", 00:29:23.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.399 "adrfam": "ipv4", 00:29:23.399 "trsvcid": "$NVMF_PORT", 00:29:23.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.399 "hdgst": ${hdgst:-false}, 00:29:23.399 "ddgst": ${ddgst:-false} 00:29:23.399 }, 00:29:23.399 "method": "bdev_nvme_attach_controller" 00:29:23.399 } 00:29:23.399 EOF 00:29:23.399 )") 
00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:23.399 "params": { 00:29:23.399 "name": "Nvme0", 00:29:23.399 "trtype": "tcp", 00:29:23.399 "traddr": "10.0.0.2", 00:29:23.399 "adrfam": "ipv4", 00:29:23.399 "trsvcid": "4420", 00:29:23.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:23.399 "hdgst": false, 00:29:23.399 "ddgst": false 00:29:23.399 }, 00:29:23.399 "method": "bdev_nvme_attach_controller" 00:29:23.399 },{ 00:29:23.399 "params": { 00:29:23.399 "name": "Nvme1", 00:29:23.399 "trtype": "tcp", 00:29:23.399 "traddr": "10.0.0.2", 00:29:23.399 "adrfam": "ipv4", 00:29:23.399 "trsvcid": "4420", 00:29:23.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:23.399 "hdgst": false, 00:29:23.399 "ddgst": false 00:29:23.399 }, 00:29:23.399 "method": "bdev_nvme_attach_controller" 00:29:23.399 }' 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # break 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:23.399 01:06:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.399 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:23.399 ... 00:29:23.399 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:23.399 ... 
00:29:23.399 fio-3.35 00:29:23.399 Starting 4 threads 00:29:23.399 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.956 00:29:29.956 filename0: (groupid=0, jobs=1): err= 0: pid=3660431: Wed May 15 01:06:15 2024 00:29:29.956 read: IOPS=2602, BW=20.3MiB/s (21.3MB/s)(102MiB/5001msec) 00:29:29.956 slat (usec): min=5, max=157, avg= 9.46, stdev= 5.73 00:29:29.956 clat (usec): min=640, max=10267, avg=3043.64, stdev=543.73 00:29:29.956 lat (usec): min=648, max=10301, avg=3053.10, stdev=544.06 00:29:29.956 clat percentiles (usec): 00:29:29.956 | 1.00th=[ 1778], 5.00th=[ 2212], 10.00th=[ 2409], 20.00th=[ 2638], 00:29:29.956 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 3064], 60.00th=[ 3163], 00:29:29.956 | 70.00th=[ 3261], 80.00th=[ 3425], 90.00th=[ 3654], 95.00th=[ 3785], 00:29:29.956 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 6128], 99.95th=[10028], 00:29:29.956 | 99.99th=[10159] 00:29:29.956 bw ( KiB/s): min=19264, max=23328, per=26.25%, avg=20894.22, stdev=1362.10, samples=9 00:29:29.957 iops : min= 2408, max= 2916, avg=2611.78, stdev=170.26, samples=9 00:29:29.957 lat (usec) : 750=0.02%, 1000=0.08% 00:29:29.957 lat (msec) : 2=2.01%, 4=94.94%, 10=2.90%, 20=0.04% 00:29:29.957 cpu : usr=97.36%, sys=2.32%, ctx=11, majf=0, minf=1634 00:29:29.957 IO depths : 1=0.3%, 2=13.7%, 4=58.0%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:29.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.957 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.957 issued rwts: total=13013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:29.957 filename0: (groupid=0, jobs=1): err= 0: pid=3660432: Wed May 15 01:06:15 2024 00:29:29.957 read: IOPS=2467, BW=19.3MiB/s (20.2MB/s)(96.4MiB/5001msec) 00:29:29.957 slat (nsec): min=6024, max=77476, avg=9090.38, stdev=5329.98 00:29:29.957 clat (usec): min=577, max=7573, avg=3212.77, stdev=595.51 00:29:29.957 lat (usec): min=586, max=7598, avg=3221.86, stdev=595.47 00:29:29.957 clat percentiles (usec): 00:29:29.957 | 1.00th=[ 1991], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2835], 00:29:29.957 | 30.00th=[ 2933], 40.00th=[ 3064], 50.00th=[ 3163], 60.00th=[ 3228], 00:29:29.957 | 70.00th=[ 3392], 80.00th=[ 3589], 90.00th=[ 3818], 95.00th=[ 4228], 00:29:29.957 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 6652], 99.95th=[ 7308], 00:29:29.957 | 99.99th=[ 7373] 00:29:29.957 bw ( KiB/s): min=18400, max=22208, per=24.82%, avg=19758.22, stdev=1225.24, samples=9 00:29:29.957 iops : min= 2300, max= 2776, avg=2469.78, stdev=153.15, samples=9 00:29:29.957 lat (usec) : 750=0.04%, 1000=0.06% 00:29:29.957 lat (msec) : 2=0.96%, 4=91.70%, 10=7.25% 00:29:29.957 cpu : usr=97.14%, sys=2.54%, ctx=5, majf=0, minf=1637 00:29:29.957 IO depths : 1=0.1%, 2=12.4%, 4=59.4%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:29.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.957 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.957 issued rwts: total=12342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:29.957 filename1: (groupid=0, jobs=1): err= 0: pid=3660433: Wed May 15 01:06:15 2024 00:29:29.957 read: IOPS=2507, BW=19.6MiB/s (20.5MB/s)(98.0MiB/5002msec) 00:29:29.957 slat (nsec): min=6024, max=77440, avg=9924.20, stdev=5957.78 00:29:29.957 clat (usec): min=648, max=8366, avg=3158.14, stdev=563.49 00:29:29.957 lat (usec): min=654, max=8395, avg=3168.07, 
stdev=563.71 00:29:29.957 clat percentiles (usec): 00:29:29.957 | 1.00th=[ 1942], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 2769], 00:29:29.957 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3130], 60.00th=[ 3195], 00:29:29.957 | 70.00th=[ 3326], 80.00th=[ 3523], 90.00th=[ 3752], 95.00th=[ 4047], 00:29:29.957 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 6456], 99.95th=[ 8160], 00:29:29.957 | 99.99th=[ 8160] 00:29:29.957 bw ( KiB/s): min=18800, max=21888, per=25.24%, avg=20092.44, stdev=953.24, samples=9 00:29:29.957 iops : min= 2350, max= 2736, avg=2511.56, stdev=119.15, samples=9 00:29:29.957 lat (usec) : 750=0.01%, 1000=0.10% 00:29:29.957 lat (msec) : 2=1.16%, 4=93.29%, 10=5.44% 00:29:29.957 cpu : usr=97.20%, sys=2.48%, ctx=7, majf=0, minf=1635 00:29:29.957 IO depths : 1=0.2%, 2=14.1%, 4=57.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:29.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.957 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.957 issued rwts: total=12541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:29.957 filename1: (groupid=0, jobs=1): err= 0: pid=3660434: Wed May 15 01:06:15 2024 00:29:29.957 read: IOPS=2373, BW=18.5MiB/s (19.4MB/s)(92.8MiB/5002msec) 00:29:29.957 slat (usec): min=3, max=149, avg=11.56, stdev= 7.43 00:29:29.957 clat (usec): min=658, max=8094, avg=3334.59, stdev=632.86 00:29:29.957 lat (usec): min=666, max=8111, avg=3346.15, stdev=632.86 00:29:29.957 clat percentiles (usec): 00:29:29.957 | 1.00th=[ 2147], 5.00th=[ 2573], 10.00th=[ 2737], 20.00th=[ 2900], 00:29:29.957 | 30.00th=[ 3032], 40.00th=[ 3130], 50.00th=[ 3228], 60.00th=[ 3326], 00:29:29.957 | 70.00th=[ 3490], 80.00th=[ 3687], 90.00th=[ 4080], 95.00th=[ 4621], 00:29:29.957 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 6783], 99.95th=[ 7832], 00:29:29.957 | 99.99th=[ 7832] 00:29:29.957 bw ( KiB/s): min=18192, max=20432, per=24.07%, avg=19157.33, stdev=664.96, samples=9 00:29:29.957 iops : min= 2274, max= 2554, avg=2394.67, stdev=83.12, samples=9 00:29:29.957 lat (usec) : 750=0.03%, 1000=0.13% 00:29:29.957 lat (msec) : 2=0.61%, 4=88.23%, 10=11.01% 00:29:29.957 cpu : usr=95.04%, sys=3.20%, ctx=86, majf=0, minf=1640 00:29:29.957 IO depths : 1=0.1%, 2=9.2%, 4=62.0%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:29.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.957 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.957 issued rwts: total=11874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:29.957 00:29:29.957 Run status group 0 (all jobs): 00:29:29.957 READ: bw=77.7MiB/s (81.5MB/s), 18.5MiB/s-20.3MiB/s (19.4MB/s-21.3MB/s), io=389MiB (408MB), run=5001-5002msec 00:29:29.957 ----------------------------------------------------- 00:29:29.957 Suppressions used: 00:29:29.957 count bytes template 00:29:29.957 6 52 /usr/src/fio/parse.c 00:29:29.957 1 8 libtcmalloc_minimal.so 00:29:29.957 1 904 libcrypto.so 00:29:29.957 ----------------------------------------------------- 00:29:29.957 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 
0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.957 00:29:29.957 real 0m25.740s 00:29:29.957 user 5m17.986s 00:29:29.957 sys 0m4.012s 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 ************************************ 00:29:29.957 END TEST fio_dif_rand_params 00:29:29.957 ************************************ 00:29:29.957 01:06:16 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:29.957 01:06:16 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:29.957 01:06:16 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 ************************************ 00:29:29.957 START TEST fio_dif_digest 00:29:29.957 ************************************ 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 
00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 bdev_null0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:29.957 [2024-05-15 01:06:16.629401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:29.957 01:06:16 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.957 { 00:29:29.957 "params": { 00:29:29.957 "name": "Nvme$subsystem", 00:29:29.957 "trtype": "$TEST_TRANSPORT", 00:29:29.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.957 "adrfam": "ipv4", 00:29:29.957 "trsvcid": "$NVMF_PORT", 00:29:29.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.957 "hdgst": ${hdgst:-false}, 00:29:29.957 "ddgst": ${ddgst:-false} 00:29:29.957 }, 00:29:29.957 "method": "bdev_nvme_attach_controller" 00:29:29.957 } 00:29:29.957 EOF 00:29:29.957 )") 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:29.957 "params": { 00:29:29.957 "name": "Nvme0", 00:29:29.957 "trtype": "tcp", 00:29:29.957 "traddr": "10.0.0.2", 00:29:29.957 "adrfam": "ipv4", 00:29:29.957 "trsvcid": "4420", 00:29:29.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:29.957 "hdgst": true, 00:29:29.957 "ddgst": true 00:29:29.957 }, 00:29:29.957 "method": "bdev_nvme_attach_controller" 00:29:29.957 }' 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # break 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:29.957 01:06:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:30.216 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:30.216 ... 00:29:30.216 fio-3.35 00:29:30.216 Starting 3 threads 00:29:30.216 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.413 00:29:42.413 filename0: (groupid=0, jobs=1): err= 0: pid=3662044: Wed May 15 01:06:27 2024 00:29:42.413 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(354MiB/10006msec) 00:29:42.413 slat (nsec): min=4721, max=22646, avg=7761.44, stdev=1054.81 00:29:42.413 clat (usec): min=4593, max=14910, avg=10604.79, stdev=1178.69 00:29:42.413 lat (usec): min=4598, max=14918, avg=10612.55, stdev=1178.70 00:29:42.413 clat percentiles (usec): 00:29:42.413 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:29:42.413 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:29:42.413 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12256], 95.00th=[12780], 00:29:42.413 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14615], 99.95th=[14877], 00:29:42.413 | 99.99th=[14877] 00:29:42.413 bw ( KiB/s): min=33280, max=39168, per=34.77%, avg=36160.00, stdev=1788.87, samples=20 00:29:42.413 iops : min= 260, max= 306, avg=282.50, stdev=13.98, samples=20 00:29:42.413 lat (msec) : 10=32.53%, 20=67.47% 00:29:42.413 cpu : usr=96.36%, sys=3.34%, ctx=15, majf=0, minf=1634 00:29:42.413 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.413 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.413 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:42.413 filename0: (groupid=0, jobs=1): err= 0: pid=3662045: Wed May 15 01:06:27 2024 00:29:42.413 read: IOPS=267, BW=33.4MiB/s (35.1MB/s)(336MiB/10047msec) 00:29:42.413 slat (nsec): min=5042, max=21882, avg=7743.72, stdev=1097.81 00:29:42.413 clat (usec): min=8487, max=49999, avg=11192.18, stdev=1750.94 00:29:42.413 lat (usec): min=8496, max=50007, avg=11199.93, stdev=1751.00 00:29:42.413 clat percentiles (usec): 00:29:42.413 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:29:42.413 | 
30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:29:42.413 | 70.00th=[11600], 80.00th=[12387], 90.00th=[13304], 95.00th=[13960], 00:29:42.413 | 99.00th=[14877], 99.50th=[15401], 99.90th=[21890], 99.95th=[47449], 00:29:42.413 | 99.99th=[50070] 00:29:42.413 bw ( KiB/s): min=31232, max=37120, per=33.03%, avg=34355.20, stdev=2008.20, samples=20 00:29:42.413 iops : min= 244, max= 290, avg=268.40, stdev=15.69, samples=20 00:29:42.413 lat (msec) : 10=18.87%, 20=80.95%, 50=0.19% 00:29:42.413 cpu : usr=96.52%, sys=3.19%, ctx=13, majf=0, minf=1638 00:29:42.414 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.414 issued rwts: total=2687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.414 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:42.414 filename0: (groupid=0, jobs=1): err= 0: pid=3662047: Wed May 15 01:06:27 2024 00:29:42.414 read: IOPS=263, BW=33.0MiB/s (34.6MB/s)(331MiB/10046msec) 00:29:42.414 slat (nsec): min=4761, max=20585, avg=7714.39, stdev=1025.73 00:29:42.414 clat (usec): min=8028, max=55059, avg=11351.43, stdev=1741.06 00:29:42.414 lat (usec): min=8035, max=55066, avg=11359.15, stdev=1741.09 00:29:42.414 clat percentiles (usec): 00:29:42.414 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:29:42.414 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:29:42.414 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13304], 95.00th=[13960], 00:29:42.414 | 99.00th=[14746], 99.50th=[15008], 99.90th=[20841], 99.95th=[50070], 00:29:42.414 | 99.99th=[55313] 00:29:42.414 bw ( KiB/s): min=30464, max=36352, per=32.58%, avg=33885.05, stdev=1810.24, samples=20 00:29:42.414 iops : min= 238, max= 284, avg=264.70, stdev=14.13, samples=20 00:29:42.414 lat (msec) : 10=11.78%, 20=88.11%, 50=0.08%, 100=0.04% 00:29:42.414 cpu : usr=95.98%, sys=3.72%, ctx=13, majf=0, minf=1634 00:29:42.414 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.414 issued rwts: total=2649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.414 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:42.414 00:29:42.414 Run status group 0 (all jobs): 00:29:42.414 READ: bw=102MiB/s (107MB/s), 33.0MiB/s-35.3MiB/s (34.6MB/s-37.0MB/s), io=1021MiB (1070MB), run=10006-10047msec 00:29:42.414 ----------------------------------------------------- 00:29:42.414 Suppressions used: 00:29:42.414 count bytes template 00:29:42.414 5 44 /usr/src/fio/parse.c 00:29:42.414 1 8 libtcmalloc_minimal.so 00:29:42.414 1 904 libcrypto.so 00:29:42.414 ----------------------------------------------------- 00:29:42.414 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:42.414 
01:06:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.414 00:29:42.414 real 0m11.838s 00:29:42.414 user 0m44.371s 00:29:42.414 sys 0m1.456s 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:42.414 01:06:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:42.414 ************************************ 00:29:42.414 END TEST fio_dif_digest 00:29:42.414 ************************************ 00:29:42.414 01:06:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:42.414 01:06:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:42.414 rmmod nvme_tcp 00:29:42.414 rmmod nvme_fabrics 00:29:42.414 rmmod nvme_keyring 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3650341 ']' 00:29:42.414 01:06:28 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3650341 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3650341 ']' 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3650341 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3650341 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3650341' 00:29:42.414 killing process with pid 3650341 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3650341 00:29:42.414 [2024-05-15 01:06:28.560565] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:42.414 01:06:28 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3650341 00:29:42.414 01:06:29 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:42.414 01:06:29 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:29:44.942 Waiting for block 
devices as requested 00:29:44.942 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:29:44.942 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:44.942 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:44.942 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:44.942 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:29:44.942 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:44.942 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:29:44.942 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:45.200 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.200 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:45.200 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.200 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.200 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.459 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:45.459 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.459 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:45.459 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.717 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:29:45.975 01:06:32 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:45.975 01:06:32 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:45.975 01:06:32 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:45.975 01:06:32 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:45.975 01:06:32 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.975 01:06:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:45.975 01:06:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.879 01:06:34 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:47.879 00:29:47.879 real 1m17.257s 00:29:47.879 user 8m10.625s 00:29:47.879 sys 0m16.675s 00:29:47.879 01:06:34 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:47.879 01:06:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:47.879 ************************************ 00:29:47.879 END TEST nvmf_dif 00:29:47.879 ************************************ 00:29:47.879 01:06:34 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:47.879 01:06:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:47.879 01:06:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:47.879 01:06:34 -- common/autotest_common.sh@10 -- # set +x 00:29:47.879 ************************************ 00:29:47.879 START TEST nvmf_abort_qd_sizes 00:29:47.879 ************************************ 00:29:47.879 01:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:48.137 * Looking for test storage... 
00:29:48.137 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.137 01:06:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:48.138 01:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:53.401 01:06:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:53.401 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:53.401 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:53.401 Found net devices under 0000:27:00.0: cvl_0_0 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:53.401 Found net devices under 0000:27:00.1: cvl_0_1 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.401 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:53.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.751 ms 00:29:53.402 00:29:53.402 --- 10.0.0.2 ping statistics --- 00:29:53.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.402 rtt min/avg/max/mdev = 0.751/0.751/0.751/0.000 ms 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:29:53.402 00:29:53.402 --- 10.0.0.1 ping statistics --- 00:29:53.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.402 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:53.402 01:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:29:55.930 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:55.930 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:55.930 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:55.930 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:29:55.930 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:55.930 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:29:55.930 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:56.187 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:29:56.187 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:56.187 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:29:56.187 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:29:56.187 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:29:56.187 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:56.187 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:29:56.187 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:56.187 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:29:56.754 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:29:57.059 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3671424 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3671424 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3671424 ']' 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:57.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:57.322 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:57.322 [2024-05-15 01:06:44.244521] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:29:57.322 [2024-05-15 01:06:44.244619] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.322 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.322 [2024-05-15 01:06:44.365555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.580 [2024-05-15 01:06:44.470112] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.580 [2024-05-15 01:06:44.470155] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.580 [2024-05-15 01:06:44.470165] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.580 [2024-05-15 01:06:44.470175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.580 [2024-05-15 01:06:44.470182] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.580 [2024-05-15 01:06:44.470295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.580 [2024-05-15 01:06:44.470302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.580 [2024-05-15 01:06:44.470401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.580 [2024-05-15 01:06:44.470411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:03:00.0 0000:c9:00.0 ]] 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:03:00.0 ]] 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:29:58.147 01:06:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:03:00.0 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:58.147 01:06:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:58.147 ************************************ 00:29:58.147 START TEST spdk_target_abort 00:29:58.147 ************************************ 00:29:58.147 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:29:58.147 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:58.147 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:03:00.0 -b spdk_target 00:29:58.147 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.147 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.406 spdk_targetn1 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.406 [2024-05-15 01:06:45.428358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.406 [2024-05-15 01:06:45.460315] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:58.406 [2024-05-15 01:06:45.460589] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:58.406 01:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:58.664 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.948 Initializing NVMe Controllers 00:30:01.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:01.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:01.948 Initialization complete. Launching workers. 00:30:01.948 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17426, failed: 0 00:30:01.948 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2013, failed to submit 15413 00:30:01.948 success 716, unsuccess 1297, failed 0 00:30:01.948 01:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:01.948 01:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:01.948 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.227 [2024-05-15 01:06:52.106359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:30:05.227 Initializing NVMe Controllers 00:30:05.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:05.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:05.227 Initialization complete. Launching workers. 00:30:05.227 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8799, failed: 0 00:30:05.227 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1269, failed to submit 7530 00:30:05.228 success 342, unsuccess 927, failed 0 00:30:05.228 01:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:05.228 01:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:05.228 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.506 Initializing NVMe Controllers 00:30:08.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:08.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:08.506 Initialization complete. Launching workers. 
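
    In summary, the spdk_target_abort setup exercised in this part of the trace reduces to the bash sketch below. Every command is the one logged above; rpc_cmd is the autotest wrapper around scripts/rpc.py for the nvmf target started earlier in the run (pid 3671424), and the PCIe address is the one picked from the bdf scan. Treat it as a sketch of the traced flow, not a standalone script.

    # attach the local NVMe device as an SPDK bdev and export it over NVMe/TCP
    rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:03:00.0 -b spdk_target   # exposes bdev spdk_targetn1
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # then run the abort example at the three queue depths shown in this trace
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done
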
00:30:08.506 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38914, failed: 0 00:30:08.506 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2570, failed to submit 36344 00:30:08.506 success 611, unsuccess 1959, failed 0 00:30:08.506 01:06:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:08.506 01:06:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.506 01:06:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:08.506 01:06:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.506 01:06:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:08.506 01:06:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.506 01:06:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3671424 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3671424 ']' 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3671424 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3671424 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3671424' 00:30:09.438 killing process with pid 3671424 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3671424 00:30:09.438 [2024-05-15 01:06:56.309308] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:09.438 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3671424 00:30:09.695 00:30:09.695 real 0m11.639s 00:30:09.695 user 0m47.176s 00:30:09.695 sys 0m1.262s 00:30:09.695 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:09.695 01:06:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:09.695 ************************************ 00:30:09.695 END TEST spdk_target_abort 00:30:09.695 ************************************ 00:30:09.695 01:06:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:09.695 01:06:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:09.695 01:06:56 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:30:09.695 01:06:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:09.695 ************************************ 00:30:09.695 START TEST kernel_target_abort 00:30:09.695 ************************************ 00:30:09.695 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:30:09.695 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:09.695 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:30:09.695 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:09.696 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:09.953 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:09.953 01:06:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:30:12.481 Waiting for block devices as requested 00:30:12.481 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:30:12.481 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:12.481 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:12.481 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:12.481 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:30:12.481 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:12.481 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:30:12.740 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:12.740 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:30:12.740 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:12.740 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:30:12.999 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:30:12.999 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:30:12.999 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:12.999 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:30:13.256 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:13.256 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:30:13.256 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:30:14.193 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:14.193 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:14.193 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:14.193 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:30:14.193 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:14.193 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:14.193 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:14.194 No valid GPT data, bailing 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:30:14.194 01:07:01 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:30:14.194 No valid GPT data, bailing 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:30:14.194 00:30:14.194 Discovery Log Number of Records 2, Generation counter 2 00:30:14.194 =====Discovery Log Entry 0====== 00:30:14.194 trtype: tcp 00:30:14.194 adrfam: ipv4 00:30:14.194 subtype: current discovery subsystem 00:30:14.194 treq: not specified, sq flow control disable supported 00:30:14.194 portid: 1 00:30:14.194 trsvcid: 4420 00:30:14.194 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:14.194 traddr: 10.0.0.1 00:30:14.194 eflags: none 00:30:14.194 sectype: none 00:30:14.194 =====Discovery Log Entry 1====== 00:30:14.194 trtype: tcp 00:30:14.194 adrfam: ipv4 00:30:14.194 
subtype: nvme subsystem 00:30:14.194 treq: not specified, sq flow control disable supported 00:30:14.194 portid: 1 00:30:14.194 trsvcid: 4420 00:30:14.194 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:14.194 traddr: 10.0.0.1 00:30:14.194 eflags: none 00:30:14.194 sectype: none 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:14.194 01:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:14.454 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.755 Initializing NVMe Controllers 00:30:17.755 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:17.755 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:17.755 
Initialization complete. Launching workers. 00:30:17.755 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85868, failed: 0 00:30:17.755 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 85868, failed to submit 0 00:30:17.755 success 0, unsuccess 85868, failed 0 00:30:17.755 01:07:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:17.755 01:07:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:17.755 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.035 Initializing NVMe Controllers 00:30:21.035 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:21.035 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:21.035 Initialization complete. Launching workers. 00:30:21.035 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138474, failed: 0 00:30:21.035 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35038, failed to submit 103436 00:30:21.035 success 0, unsuccess 35038, failed 0 00:30:21.035 01:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:21.035 01:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:21.035 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.626 Initializing NVMe Controllers 00:30:23.626 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:23.626 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:23.626 Initialization complete. Launching workers. 
00:30:23.626 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133268, failed: 0 00:30:23.626 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33326, failed to submit 99942 00:30:23.626 success 0, unsuccess 33326, failed 0 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:23.626 01:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:30:26.918 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:26.918 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:26.918 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:26.918 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:30:26.918 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:26.918 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:30:26.918 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:26.918 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:30:26.918 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:26.918 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:30:26.918 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:30:26.918 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:30:26.918 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:26.918 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:30:26.918 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:26.918 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:30:27.177 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:30:27.437 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:30:27.695 00:30:27.695 real 0m17.949s 00:30:27.695 user 0m9.129s 00:30:27.695 sys 0m4.837s 00:30:27.695 01:07:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:27.695 01:07:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.695 ************************************ 00:30:27.695 END TEST kernel_target_abort 00:30:27.695 ************************************ 00:30:27.695 01:07:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:27.695 01:07:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:27.695 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:27.695 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:27.695 01:07:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:27.695 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:27.695 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:27.695 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:27.695 rmmod nvme_tcp 00:30:27.695 rmmod nvme_fabrics 00:30:27.953 rmmod nvme_keyring 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3671424 ']' 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3671424 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3671424 ']' 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3671424 00:30:27.953 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3671424) - No such process 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3671424 is not found' 00:30:27.953 Process with pid 3671424 is not found 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:27.953 01:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:30:30.484 Waiting for block devices as requested 00:30:30.484 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:30:30.484 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:30.485 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:30.743 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:30.743 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:30:30.743 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:30.743 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:30:31.003 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:31.003 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:30:31.003 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:31.003 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:30:31.262 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:30:31.262 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:30:31.262 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:31.262 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:30:31.522 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:31.522 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:30:31.522 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:30:31.780 01:07:18 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:31.780 01:07:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:31.780 01:07:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:31.780 01:07:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:31.780 01:07:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.780 01:07:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:31.780 01:07:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.314 01:07:20 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:34.314 00:30:34.314 real 0m45.869s 00:30:34.314 user 0m59.899s 00:30:34.314 sys 0m13.683s 00:30:34.314 01:07:20 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:30:34.314 01:07:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:34.314 ************************************ 00:30:34.314 END TEST nvmf_abort_qd_sizes 00:30:34.314 ************************************ 00:30:34.314 01:07:20 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:30:34.314 01:07:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:34.314 01:07:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:34.314 01:07:20 -- common/autotest_common.sh@10 -- # set +x 00:30:34.314 ************************************ 00:30:34.314 START TEST keyring_file 00:30:34.314 ************************************ 00:30:34.314 01:07:20 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:30:34.314 * Looking for test storage... 00:30:34.314 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring 00:30:34.314 01:07:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/common.sh 00:30:34.314 01:07:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.314 01:07:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:34.314 01:07:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:34.315 01:07:20 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.315 01:07:20 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.315 01:07:20 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.315 01:07:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.315 01:07:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.315 01:07:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.315 01:07:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:34.315 01:07:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:34.315 01:07:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:34.315 01:07:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:34.315 01:07:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:34.315 01:07:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:34.315 01:07:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:34.315 01:07:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:34.315 01:07:20 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Q4cqNCdDdF 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Q4cqNCdDdF 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Q4cqNCdDdF 00:30:34.315 01:07:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Q4cqNCdDdF 00:30:34.315 01:07:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.p0Aom03zKf 00:30:34.315 01:07:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:34.315 01:07:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:34.315 01:07:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.p0Aom03zKf 00:30:34.315 01:07:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.p0Aom03zKf 00:30:34.315 01:07:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.p0Aom03zKf 00:30:34.315 01:07:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=3681746 00:30:34.315 01:07:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3681746 00:30:34.315 01:07:21 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3681746 ']' 00:30:34.315 01:07:21 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.315 01:07:21 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:34.315 01:07:21 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
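
    For reference, the key preparation traced above amounts to the short bash sketch below. The key bytes, temp-file paths and chmod mode are taken from the trace; format_interchange_psk is the helper sourced from the test's nvmf/common.sh, the redirection of its output into the temp file is implied by the trace rather than shown, and the exact NVMeTLSkey-1 string it emits is not reproduced here.

    # sketch of prep_key as exercised above (key0 shown; key1 is analogous)
    key0_path=$(mktemp)                                        # /tmp/tmp.Q4cqNCdDdF in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0_path"   # wraps the raw key as an NVMeTLSkey-1 PSK via a python helper
    chmod 0600 "$key0_path"                                    # 0660 is rejected later in the trace as invalid permissions
    # once bdevperf is listening, the file is registered under the name "key0":
    # scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0_path"
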
00:30:34.315 01:07:21 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:34.315 01:07:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:34.315 01:07:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:30:34.315 [2024-05-15 01:07:21.114191] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:30:34.315 [2024-05-15 01:07:21.114308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681746 ] 00:30:34.315 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.315 [2024-05-15 01:07:21.226889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.315 [2024-05-15 01:07:21.319713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:30:34.883 01:07:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:34.883 [2024-05-15 01:07:21.799408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.883 null0 00:30:34.883 [2024-05-15 01:07:21.831341] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:34.883 [2024-05-15 01:07:21.831418] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:34.883 [2024-05-15 01:07:21.831595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:34.883 [2024-05-15 01:07:21.839392] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.883 01:07:21 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:34.883 [2024-05-15 01:07:21.851376] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:34.883 request: 00:30:34.883 { 00:30:34.883 "nqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:34.883 "secure_channel": false, 00:30:34.883 "listen_address": { 00:30:34.883 "trtype": "tcp", 00:30:34.883 "traddr": "127.0.0.1", 00:30:34.883 "trsvcid": "4420" 00:30:34.883 }, 00:30:34.883 "method": "nvmf_subsystem_add_listener", 00:30:34.883 "req_id": 1 00:30:34.883 } 00:30:34.883 Got JSON-RPC error response 00:30:34.883 response: 00:30:34.883 { 00:30:34.883 "code": -32602, 00:30:34.883 "message": "Invalid parameters" 00:30:34.883 } 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:34.883 01:07:21 keyring_file -- keyring/file.sh@46 -- # bperfpid=3681868 00:30:34.883 01:07:21 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3681868 /var/tmp/bperf.sock 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3681868 ']' 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:34.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:34.883 01:07:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:34.883 01:07:21 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:34.883 [2024-05-15 01:07:21.928412] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:30:34.883 [2024-05-15 01:07:21.928519] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681868 ] 00:30:35.143 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.143 [2024-05-15 01:07:22.063958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.143 [2024-05-15 01:07:22.205469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.712 01:07:22 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:35.712 01:07:22 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:30:35.712 01:07:22 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q4cqNCdDdF 00:30:35.712 01:07:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q4cqNCdDdF 00:30:35.712 01:07:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.p0Aom03zKf 00:30:35.712 01:07:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.p0Aom03zKf 00:30:35.971 01:07:22 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:35.971 01:07:22 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:35.971 01:07:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:35.971 01:07:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:35.971 01:07:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:36.229 01:07:23 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Q4cqNCdDdF == \/\t\m\p\/\t\m\p\.\Q\4\c\q\N\C\d\D\d\F ]] 00:30:36.229 01:07:23 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:30:36.229 01:07:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:36.229 01:07:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:36.229 01:07:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:36.229 01:07:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:36.229 01:07:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.p0Aom03zKf == \/\t\m\p\/\t\m\p\.\p\0\A\o\m\0\3\z\K\f ]] 00:30:36.229 01:07:23 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:36.229 01:07:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:36.229 01:07:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:36.229 01:07:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:36.229 01:07:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:36.229 01:07:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:36.487 01:07:23 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:36.487 01:07:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:36.487 01:07:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:36.487 01:07:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:36.487 01:07:23 keyring_file -- keyring/common.sh@10 -- 
# bperf_cmd keyring_get_keys 00:30:36.487 01:07:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:36.487 01:07:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:36.487 01:07:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:36.487 01:07:23 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:36.487 01:07:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:36.745 [2024-05-15 01:07:23.585906] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:36.745 nvme0n1 00:30:36.745 01:07:23 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:36.745 01:07:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:36.745 01:07:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:36.745 01:07:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:36.745 01:07:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:36.745 01:07:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:37.003 01:07:23 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:37.003 01:07:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:37.004 01:07:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:37.004 01:07:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:37.004 01:07:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:37.004 01:07:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:37.004 01:07:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:37.004 01:07:23 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:37.004 01:07:23 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:37.004 Running I/O for 1 seconds... 
00:30:38.381 00:30:38.381 Latency(us) 00:30:38.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.381 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:38.381 nvme0n1 : 1.00 17809.53 69.57 0.00 0.00 7167.64 3604.48 17246.32 00:30:38.381 =================================================================================================================== 00:30:38.381 Total : 17809.53 69.57 0.00 0.00 7167.64 3604.48 17246.32 00:30:38.381 0 00:30:38.381 01:07:25 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:38.381 01:07:25 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:38.381 01:07:25 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:38.381 01:07:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:38.381 01:07:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:38.639 01:07:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:38.639 01:07:25 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:38.639 01:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:30:38.639 [2024-05-15 01:07:25.594678] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:38.639 [2024-05-15 01:07:25.594908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a8980 (107): Transport endpoint is not connected 00:30:38.639 [2024-05-15 01:07:25.595888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a8980 (9): Bad file descriptor 00:30:38.639 [2024-05-15 01:07:25.596885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.639 [2024-05-15 01:07:25.596901] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:38.639 [2024-05-15 01:07:25.596911] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.639 request: 00:30:38.639 { 00:30:38.639 "name": "nvme0", 00:30:38.639 "trtype": "tcp", 00:30:38.639 "traddr": "127.0.0.1", 00:30:38.639 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:38.639 "adrfam": "ipv4", 00:30:38.639 "trsvcid": "4420", 00:30:38.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:38.639 "psk": "key1", 00:30:38.639 "method": "bdev_nvme_attach_controller", 00:30:38.639 "req_id": 1 00:30:38.639 } 00:30:38.639 Got JSON-RPC error response 00:30:38.639 response: 00:30:38.639 { 00:30:38.639 "code": -32602, 00:30:38.639 "message": "Invalid parameters" 00:30:38.639 } 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:38.639 01:07:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:38.639 01:07:25 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:38.639 01:07:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:38.639 01:07:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:38.639 01:07:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:38.639 01:07:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:38.639 01:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:38.898 01:07:25 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:38.898 01:07:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:38.898 01:07:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:38.898 01:07:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:38.898 01:07:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:38.898 01:07:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:38.898 01:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:38.898 01:07:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:38.898 01:07:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:38.898 01:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 
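
    The negative test above (attaching with the mismatched key1) sits on top of a short happy path. A condensed bash sketch of that path follows, using only commands that appear in the trace; paths are relative to the spdk checkout, waitforlisten is the autotest helper that polls until the RPC socket answers, and the target side (the spdk_tgt started earlier with the TLS listener on 127.0.0.1:4420) is assumed to be running.

    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
    waitforlisten $! /var/tmp/bperf.sock
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q4cqNCdDdF
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0   # yields nvme0n1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests     # the 1-second randrw run above
    # repeating the attach with --psk key1 is wrapped in NOT and is expected to
    # fail with the "Transport endpoint is not connected" error shown above
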
00:30:39.157 01:07:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:39.157 01:07:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:39.157 01:07:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:39.157 01:07:26 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:39.157 01:07:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:39.417 01:07:26 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:39.417 01:07:26 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Q4cqNCdDdF 00:30:39.417 01:07:26 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q4cqNCdDdF 00:30:39.417 01:07:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:39.417 01:07:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q4cqNCdDdF 00:30:39.417 01:07:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:39.417 01:07:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.417 01:07:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:39.418 01:07:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.418 01:07:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q4cqNCdDdF 00:30:39.418 01:07:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q4cqNCdDdF 00:30:39.418 [2024-05-15 01:07:26.443930] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Q4cqNCdDdF': 0100660 00:30:39.418 [2024-05-15 01:07:26.443969] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:39.418 request: 00:30:39.418 { 00:30:39.418 "name": "key0", 00:30:39.418 "path": "/tmp/tmp.Q4cqNCdDdF", 00:30:39.418 "method": "keyring_file_add_key", 00:30:39.418 "req_id": 1 00:30:39.418 } 00:30:39.418 Got JSON-RPC error response 00:30:39.418 response: 00:30:39.418 { 00:30:39.418 "code": -1, 00:30:39.418 "message": "Operation not permitted" 00:30:39.418 } 00:30:39.418 01:07:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:39.418 01:07:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:39.418 01:07:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:39.418 01:07:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:39.418 01:07:26 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Q4cqNCdDdF 00:30:39.418 01:07:26 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q4cqNCdDdF 00:30:39.418 01:07:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q4cqNCdDdF 00:30:39.677 01:07:26 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Q4cqNCdDdF 00:30:39.677 01:07:26 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:39.677 01:07:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:39.677 01:07:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:39.677 
01:07:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:39.677 01:07:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:39.677 01:07:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:39.938 01:07:26 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:39.938 01:07:26 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:39.938 01:07:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:39.938 [2024-05-15 01:07:26.904092] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Q4cqNCdDdF': No such file or directory 00:30:39.938 [2024-05-15 01:07:26.904124] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:39.938 [2024-05-15 01:07:26.904150] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:39.938 [2024-05-15 01:07:26.904159] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:39.938 [2024-05-15 01:07:26.904169] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:39.938 request: 00:30:39.938 { 00:30:39.938 "name": "nvme0", 00:30:39.938 "trtype": "tcp", 00:30:39.938 "traddr": "127.0.0.1", 00:30:39.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:39.938 "adrfam": "ipv4", 00:30:39.938 "trsvcid": "4420", 00:30:39.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.938 "psk": "key0", 00:30:39.938 "method": "bdev_nvme_attach_controller", 00:30:39.938 "req_id": 1 00:30:39.938 } 00:30:39.938 Got JSON-RPC error response 00:30:39.938 response: 00:30:39.938 { 00:30:39.938 "code": -19, 00:30:39.938 "message": "No such device" 00:30:39.938 } 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:39.938 01:07:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:39.938 01:07:26 keyring_file -- keyring/file.sh@92 
-- # bperf_cmd keyring_file_remove_key key0 00:30:39.938 01:07:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:40.197 01:07:27 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qEVHX3JSBT 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:40.197 01:07:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:40.197 01:07:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:40.197 01:07:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:40.197 01:07:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:40.197 01:07:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:40.197 01:07:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qEVHX3JSBT 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qEVHX3JSBT 00:30:40.197 01:07:27 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.qEVHX3JSBT 00:30:40.197 01:07:27 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qEVHX3JSBT 00:30:40.197 01:07:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qEVHX3JSBT 00:30:40.455 01:07:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:40.455 01:07:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:40.455 nvme0n1 00:30:40.455 01:07:27 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:40.455 01:07:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:40.455 01:07:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:40.455 01:07:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:40.455 01:07:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:40.455 01:07:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:40.713 01:07:27 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:40.713 01:07:27 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:40.713 01:07:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:40.713 01:07:27 
keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:40.713 01:07:27 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:40.713 01:07:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:40.713 01:07:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:40.713 01:07:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:40.972 01:07:27 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:40.972 01:07:27 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:40.972 01:07:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:40.972 01:07:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:40.972 01:07:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:40.972 01:07:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:40.972 01:07:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:40.972 01:07:28 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:40.972 01:07:28 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:40.972 01:07:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:41.231 01:07:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:41.231 01:07:28 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:41.231 01:07:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:41.491 01:07:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:41.491 01:07:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qEVHX3JSBT 00:30:41.491 01:07:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qEVHX3JSBT 00:30:41.491 01:07:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.p0Aom03zKf 00:30:41.491 01:07:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.p0Aom03zKf 00:30:41.751 01:07:28 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:41.751 01:07:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:41.751 nvme0n1 00:30:41.751 01:07:28 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:41.751 01:07:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:42.011 01:07:28 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:42.011 "subsystems": [ 00:30:42.011 { 00:30:42.011 "subsystem": "keyring", 00:30:42.011 "config": [ 00:30:42.011 { 00:30:42.011 "method": "keyring_file_add_key", 
00:30:42.011 "params": { 00:30:42.011 "name": "key0", 00:30:42.011 "path": "/tmp/tmp.qEVHX3JSBT" 00:30:42.011 } 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "method": "keyring_file_add_key", 00:30:42.011 "params": { 00:30:42.011 "name": "key1", 00:30:42.011 "path": "/tmp/tmp.p0Aom03zKf" 00:30:42.011 } 00:30:42.011 } 00:30:42.011 ] 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "subsystem": "iobuf", 00:30:42.011 "config": [ 00:30:42.011 { 00:30:42.011 "method": "iobuf_set_options", 00:30:42.011 "params": { 00:30:42.011 "small_pool_count": 8192, 00:30:42.011 "large_pool_count": 1024, 00:30:42.011 "small_bufsize": 8192, 00:30:42.011 "large_bufsize": 135168 00:30:42.011 } 00:30:42.011 } 00:30:42.011 ] 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "subsystem": "sock", 00:30:42.011 "config": [ 00:30:42.011 { 00:30:42.011 "method": "sock_impl_set_options", 00:30:42.011 "params": { 00:30:42.011 "impl_name": "posix", 00:30:42.011 "recv_buf_size": 2097152, 00:30:42.011 "send_buf_size": 2097152, 00:30:42.011 "enable_recv_pipe": true, 00:30:42.011 "enable_quickack": false, 00:30:42.011 "enable_placement_id": 0, 00:30:42.011 "enable_zerocopy_send_server": true, 00:30:42.011 "enable_zerocopy_send_client": false, 00:30:42.011 "zerocopy_threshold": 0, 00:30:42.011 "tls_version": 0, 00:30:42.011 "enable_ktls": false 00:30:42.011 } 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "method": "sock_impl_set_options", 00:30:42.011 "params": { 00:30:42.011 "impl_name": "ssl", 00:30:42.011 "recv_buf_size": 4096, 00:30:42.011 "send_buf_size": 4096, 00:30:42.011 "enable_recv_pipe": true, 00:30:42.011 "enable_quickack": false, 00:30:42.011 "enable_placement_id": 0, 00:30:42.011 "enable_zerocopy_send_server": true, 00:30:42.011 "enable_zerocopy_send_client": false, 00:30:42.011 "zerocopy_threshold": 0, 00:30:42.011 "tls_version": 0, 00:30:42.011 "enable_ktls": false 00:30:42.011 } 00:30:42.011 } 00:30:42.011 ] 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "subsystem": "vmd", 00:30:42.011 "config": [] 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "subsystem": "accel", 00:30:42.011 "config": [ 00:30:42.011 { 00:30:42.011 "method": "accel_set_options", 00:30:42.011 "params": { 00:30:42.011 "small_cache_size": 128, 00:30:42.011 "large_cache_size": 16, 00:30:42.011 "task_count": 2048, 00:30:42.011 "sequence_count": 2048, 00:30:42.011 "buf_count": 2048 00:30:42.011 } 00:30:42.011 } 00:30:42.011 ] 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "subsystem": "bdev", 00:30:42.011 "config": [ 00:30:42.011 { 00:30:42.011 "method": "bdev_set_options", 00:30:42.011 "params": { 00:30:42.011 "bdev_io_pool_size": 65535, 00:30:42.011 "bdev_io_cache_size": 256, 00:30:42.011 "bdev_auto_examine": true, 00:30:42.011 "iobuf_small_cache_size": 128, 00:30:42.011 "iobuf_large_cache_size": 16 00:30:42.011 } 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "method": "bdev_raid_set_options", 00:30:42.011 "params": { 00:30:42.011 "process_window_size_kb": 1024 00:30:42.011 } 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "method": "bdev_iscsi_set_options", 00:30:42.011 "params": { 00:30:42.011 "timeout_sec": 30 00:30:42.011 } 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "method": "bdev_nvme_set_options", 00:30:42.011 "params": { 00:30:42.011 "action_on_timeout": "none", 00:30:42.011 "timeout_us": 0, 00:30:42.011 "timeout_admin_us": 0, 00:30:42.011 "keep_alive_timeout_ms": 10000, 00:30:42.011 "arbitration_burst": 0, 00:30:42.011 "low_priority_weight": 0, 00:30:42.011 "medium_priority_weight": 0, 00:30:42.011 "high_priority_weight": 0, 00:30:42.011 
"nvme_adminq_poll_period_us": 10000, 00:30:42.011 "nvme_ioq_poll_period_us": 0, 00:30:42.011 "io_queue_requests": 512, 00:30:42.011 "delay_cmd_submit": true, 00:30:42.011 "transport_retry_count": 4, 00:30:42.011 "bdev_retry_count": 3, 00:30:42.011 "transport_ack_timeout": 0, 00:30:42.011 "ctrlr_loss_timeout_sec": 0, 00:30:42.011 "reconnect_delay_sec": 0, 00:30:42.011 "fast_io_fail_timeout_sec": 0, 00:30:42.011 "disable_auto_failback": false, 00:30:42.011 "generate_uuids": false, 00:30:42.011 "transport_tos": 0, 00:30:42.011 "nvme_error_stat": false, 00:30:42.011 "rdma_srq_size": 0, 00:30:42.011 "io_path_stat": false, 00:30:42.011 "allow_accel_sequence": false, 00:30:42.011 "rdma_max_cq_size": 0, 00:30:42.011 "rdma_cm_event_timeout_ms": 0, 00:30:42.011 "dhchap_digests": [ 00:30:42.011 "sha256", 00:30:42.011 "sha384", 00:30:42.011 "sha512" 00:30:42.011 ], 00:30:42.011 "dhchap_dhgroups": [ 00:30:42.011 "null", 00:30:42.011 "ffdhe2048", 00:30:42.011 "ffdhe3072", 00:30:42.011 "ffdhe4096", 00:30:42.011 "ffdhe6144", 00:30:42.011 "ffdhe8192" 00:30:42.011 ] 00:30:42.011 } 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "method": "bdev_nvme_attach_controller", 00:30:42.011 "params": { 00:30:42.011 "name": "nvme0", 00:30:42.011 "trtype": "TCP", 00:30:42.011 "adrfam": "IPv4", 00:30:42.011 "traddr": "127.0.0.1", 00:30:42.011 "trsvcid": "4420", 00:30:42.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.011 "prchk_reftag": false, 00:30:42.011 "prchk_guard": false, 00:30:42.011 "ctrlr_loss_timeout_sec": 0, 00:30:42.011 "reconnect_delay_sec": 0, 00:30:42.011 "fast_io_fail_timeout_sec": 0, 00:30:42.011 "psk": "key0", 00:30:42.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.011 "hdgst": false, 00:30:42.011 "ddgst": false 00:30:42.011 } 00:30:42.011 }, 00:30:42.011 { 00:30:42.011 "method": "bdev_nvme_set_hotplug", 00:30:42.011 "params": { 00:30:42.011 "period_us": 100000, 00:30:42.011 "enable": false 00:30:42.011 } 00:30:42.012 }, 00:30:42.012 { 00:30:42.012 "method": "bdev_wait_for_examine" 00:30:42.012 } 00:30:42.012 ] 00:30:42.012 }, 00:30:42.012 { 00:30:42.012 "subsystem": "nbd", 00:30:42.012 "config": [] 00:30:42.012 } 00:30:42.012 ] 00:30:42.012 }' 00:30:42.012 01:07:28 keyring_file -- keyring/file.sh@114 -- # killprocess 3681868 00:30:42.012 01:07:28 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3681868 ']' 00:30:42.012 01:07:28 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3681868 00:30:42.012 01:07:28 keyring_file -- common/autotest_common.sh@951 -- # uname 00:30:42.012 01:07:28 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:42.012 01:07:28 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3681868 00:30:42.012 01:07:29 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:42.012 01:07:29 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:42.012 01:07:29 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3681868' 00:30:42.012 killing process with pid 3681868 00:30:42.012 01:07:29 keyring_file -- common/autotest_common.sh@965 -- # kill 3681868 00:30:42.012 Received shutdown signal, test time was about 1.000000 seconds 00:30:42.012 00:30:42.012 Latency(us) 00:30:42.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.012 =================================================================================================================== 00:30:42.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:30:42.012 01:07:29 keyring_file -- common/autotest_common.sh@970 -- # wait 3681868 00:30:42.645 01:07:29 keyring_file -- keyring/file.sh@117 -- # bperfpid=3683494 00:30:42.645 01:07:29 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3683494 /var/tmp/bperf.sock 00:30:42.645 01:07:29 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3683494 ']' 00:30:42.645 01:07:29 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:42.645 01:07:29 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:42.645 01:07:29 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:42.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:42.645 01:07:29 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:42.645 01:07:29 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:42.645 01:07:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:42.645 01:07:29 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:42.645 "subsystems": [ 00:30:42.645 { 00:30:42.645 "subsystem": "keyring", 00:30:42.645 "config": [ 00:30:42.645 { 00:30:42.645 "method": "keyring_file_add_key", 00:30:42.645 "params": { 00:30:42.645 "name": "key0", 00:30:42.645 "path": "/tmp/tmp.qEVHX3JSBT" 00:30:42.645 } 00:30:42.645 }, 00:30:42.645 { 00:30:42.645 "method": "keyring_file_add_key", 00:30:42.645 "params": { 00:30:42.645 "name": "key1", 00:30:42.645 "path": "/tmp/tmp.p0Aom03zKf" 00:30:42.645 } 00:30:42.645 } 00:30:42.645 ] 00:30:42.645 }, 00:30:42.645 { 00:30:42.645 "subsystem": "iobuf", 00:30:42.645 "config": [ 00:30:42.645 { 00:30:42.645 "method": "iobuf_set_options", 00:30:42.645 "params": { 00:30:42.645 "small_pool_count": 8192, 00:30:42.645 "large_pool_count": 1024, 00:30:42.645 "small_bufsize": 8192, 00:30:42.645 "large_bufsize": 135168 00:30:42.645 } 00:30:42.645 } 00:30:42.645 ] 00:30:42.645 }, 00:30:42.645 { 00:30:42.645 "subsystem": "sock", 00:30:42.645 "config": [ 00:30:42.645 { 00:30:42.645 "method": "sock_impl_set_options", 00:30:42.645 "params": { 00:30:42.645 "impl_name": "posix", 00:30:42.645 "recv_buf_size": 2097152, 00:30:42.645 "send_buf_size": 2097152, 00:30:42.645 "enable_recv_pipe": true, 00:30:42.645 "enable_quickack": false, 00:30:42.645 "enable_placement_id": 0, 00:30:42.645 "enable_zerocopy_send_server": true, 00:30:42.645 "enable_zerocopy_send_client": false, 00:30:42.645 "zerocopy_threshold": 0, 00:30:42.645 "tls_version": 0, 00:30:42.645 "enable_ktls": false 00:30:42.645 } 00:30:42.645 }, 00:30:42.645 { 00:30:42.645 "method": "sock_impl_set_options", 00:30:42.645 "params": { 00:30:42.645 "impl_name": "ssl", 00:30:42.645 "recv_buf_size": 4096, 00:30:42.645 "send_buf_size": 4096, 00:30:42.645 "enable_recv_pipe": true, 00:30:42.645 "enable_quickack": false, 00:30:42.645 "enable_placement_id": 0, 00:30:42.645 "enable_zerocopy_send_server": true, 00:30:42.645 "enable_zerocopy_send_client": false, 00:30:42.645 "zerocopy_threshold": 0, 00:30:42.645 "tls_version": 0, 00:30:42.645 "enable_ktls": false 00:30:42.645 } 00:30:42.645 } 00:30:42.645 ] 00:30:42.645 }, 00:30:42.645 { 00:30:42.645 "subsystem": "vmd", 00:30:42.645 "config": [] 00:30:42.645 }, 00:30:42.645 { 00:30:42.645 "subsystem": "accel", 00:30:42.645 "config": [ 00:30:42.645 { 
00:30:42.645 "method": "accel_set_options", 00:30:42.646 "params": { 00:30:42.646 "small_cache_size": 128, 00:30:42.646 "large_cache_size": 16, 00:30:42.646 "task_count": 2048, 00:30:42.646 "sequence_count": 2048, 00:30:42.646 "buf_count": 2048 00:30:42.646 } 00:30:42.646 } 00:30:42.646 ] 00:30:42.646 }, 00:30:42.646 { 00:30:42.646 "subsystem": "bdev", 00:30:42.646 "config": [ 00:30:42.646 { 00:30:42.646 "method": "bdev_set_options", 00:30:42.646 "params": { 00:30:42.646 "bdev_io_pool_size": 65535, 00:30:42.646 "bdev_io_cache_size": 256, 00:30:42.646 "bdev_auto_examine": true, 00:30:42.646 "iobuf_small_cache_size": 128, 00:30:42.646 "iobuf_large_cache_size": 16 00:30:42.646 } 00:30:42.646 }, 00:30:42.646 { 00:30:42.646 "method": "bdev_raid_set_options", 00:30:42.646 "params": { 00:30:42.646 "process_window_size_kb": 1024 00:30:42.646 } 00:30:42.646 }, 00:30:42.646 { 00:30:42.646 "method": "bdev_iscsi_set_options", 00:30:42.646 "params": { 00:30:42.646 "timeout_sec": 30 00:30:42.646 } 00:30:42.646 }, 00:30:42.646 { 00:30:42.646 "method": "bdev_nvme_set_options", 00:30:42.646 "params": { 00:30:42.646 "action_on_timeout": "none", 00:30:42.646 "timeout_us": 0, 00:30:42.646 "timeout_admin_us": 0, 00:30:42.646 "keep_alive_timeout_ms": 10000, 00:30:42.646 "arbitration_burst": 0, 00:30:42.646 "low_priority_weight": 0, 00:30:42.646 "medium_priority_weight": 0, 00:30:42.646 "high_priority_weight": 0, 00:30:42.646 "nvme_adminq_poll_period_us": 10000, 00:30:42.646 "nvme_ioq_poll_period_us": 0, 00:30:42.646 "io_queue_requests": 512, 00:30:42.646 "delay_cmd_submit": true, 00:30:42.646 "transport_retry_count": 4, 00:30:42.646 "bdev_retry_count": 3, 00:30:42.646 "transport_ack_timeout": 0, 00:30:42.646 "ctrlr_loss_timeout_sec": 0, 00:30:42.646 "reconnect_delay_sec": 0, 00:30:42.646 "fast_io_fail_timeout_sec": 0, 00:30:42.646 "disable_auto_failback": false, 00:30:42.646 "generate_uuids": false, 00:30:42.646 "transport_tos": 0, 00:30:42.646 "nvme_error_stat": false, 00:30:42.646 "rdma_srq_size": 0, 00:30:42.646 "io_path_stat": false, 00:30:42.646 "allow_accel_sequence": false, 00:30:42.646 "rdma_max_cq_size": 0, 00:30:42.646 "rdma_cm_event_timeout_ms": 0, 00:30:42.646 "dhchap_digests": [ 00:30:42.646 "sha256", 00:30:42.646 "sha384", 00:30:42.646 "sha512" 00:30:42.646 ], 00:30:42.646 "dhchap_dhgroups": [ 00:30:42.646 "null", 00:30:42.646 "ffdhe2048", 00:30:42.646 "ffdhe3072", 00:30:42.646 "ffdhe4096", 00:30:42.646 "ffdhe6144", 00:30:42.646 "ffdhe8192" 00:30:42.646 ] 00:30:42.646 } 00:30:42.646 }, 00:30:42.646 { 00:30:42.646 "method": "bdev_nvme_attach_controller", 00:30:42.646 "params": { 00:30:42.646 "name": "nvme0", 00:30:42.646 "trtype": "TCP", 00:30:42.646 "adrfam": "IPv4", 00:30:42.646 "traddr": "127.0.0.1", 00:30:42.646 "trsvcid": "4420", 00:30:42.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.646 "prchk_reftag": false, 00:30:42.646 "prchk_guard": false, 00:30:42.646 "ctrlr_loss_timeout_sec": 0, 00:30:42.646 "reconnect_delay_sec": 0, 00:30:42.646 "fast_io_fail_timeout_sec": 0, 00:30:42.646 "psk": "key0", 00:30:42.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.646 "hdgst": false, 00:30:42.646 "ddgst": false 00:30:42.646 } 00:30:42.646 }, 00:30:42.646 { 00:30:42.646 "method": "bdev_nvme_set_hotplug", 00:30:42.646 "params": { 00:30:42.646 "period_us": 100000, 00:30:42.646 "enable": false 00:30:42.646 } 00:30:42.646 }, 00:30:42.646 { 00:30:42.646 "method": "bdev_wait_for_examine" 00:30:42.646 } 00:30:42.646 ] 00:30:42.646 }, 00:30:42.646 { 00:30:42.646 "subsystem": "nbd", 00:30:42.646 "config": 
[] 00:30:42.646 } 00:30:42.646 ] 00:30:42.646 }' 00:30:42.646 [2024-05-15 01:07:29.457378] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:30:42.646 [2024-05-15 01:07:29.457503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683494 ] 00:30:42.646 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.646 [2024-05-15 01:07:29.567318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.646 [2024-05-15 01:07:29.662626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.904 [2024-05-15 01:07:29.877874] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:43.163 01:07:30 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:43.163 01:07:30 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:30:43.163 01:07:30 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:43.163 01:07:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:43.163 01:07:30 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:43.421 01:07:30 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:43.421 01:07:30 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:43.421 01:07:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:43.421 01:07:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:43.421 01:07:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:43.679 01:07:30 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:43.679 01:07:30 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:43.679 01:07:30 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:43.679 01:07:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:43.938 01:07:30 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:43.938 01:07:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:43.938 01:07:30 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.qEVHX3JSBT /tmp/tmp.p0Aom03zKf 00:30:43.938 01:07:30 keyring_file -- keyring/file.sh@20 -- # killprocess 3683494 00:30:43.939 01:07:30 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3683494 ']' 00:30:43.939 
01:07:30 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3683494 00:30:43.939 01:07:30 keyring_file -- common/autotest_common.sh@951 -- # uname 00:30:43.939 01:07:30 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:43.939 01:07:30 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3683494 00:30:43.939 01:07:30 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:43.939 01:07:30 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:43.939 01:07:30 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3683494' 00:30:43.939 killing process with pid 3683494 00:30:43.939 01:07:30 keyring_file -- common/autotest_common.sh@965 -- # kill 3683494 00:30:43.939 Received shutdown signal, test time was about 1.000000 seconds 00:30:43.939 00:30:43.939 Latency(us) 00:30:43.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.939 =================================================================================================================== 00:30:43.939 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:43.939 01:07:30 keyring_file -- common/autotest_common.sh@970 -- # wait 3683494 00:30:44.199 01:07:31 keyring_file -- keyring/file.sh@21 -- # killprocess 3681746 00:30:44.199 01:07:31 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3681746 ']' 00:30:44.199 01:07:31 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3681746 00:30:44.199 01:07:31 keyring_file -- common/autotest_common.sh@951 -- # uname 00:30:44.199 01:07:31 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:44.200 01:07:31 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3681746 00:30:44.200 01:07:31 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:44.200 01:07:31 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:44.200 01:07:31 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3681746' 00:30:44.200 killing process with pid 3681746 00:30:44.200 01:07:31 keyring_file -- common/autotest_common.sh@965 -- # kill 3681746 00:30:44.200 [2024-05-15 01:07:31.195802] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:44.200 [2024-05-15 01:07:31.195858] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:44.200 01:07:31 keyring_file -- common/autotest_common.sh@970 -- # wait 3681746 00:30:45.132 00:30:45.132 real 0m11.173s 00:30:45.132 user 0m24.878s 00:30:45.132 sys 0m2.597s 00:30:45.132 01:07:32 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:45.132 01:07:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:45.132 ************************************ 00:30:45.132 END TEST keyring_file 00:30:45.132 ************************************ 00:30:45.132 01:07:32 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:30:45.132 01:07:32 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 
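The second bdevperf instance that was just shut down (pid 3683494) was driven from a replayed configuration rather than live RPC calls. Roughly, the flow looks like the sketch below: the config string is the save_config output shown above, and the /dev/fd/63 in the recorded command line is what the shell's process substitution expands to.

spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"

# capture the running setup (keyring keys + bdev_nvme_attach_controller with psk key0)
# from the first bdevperf instance before killing it
config=$("$rpc" -s /var/tmp/bperf.sock save_config)

# replay it into a fresh bdevperf; -c <(...) is recorded as -c /dev/fd/63 in the log
"$spdk/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")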
00:30:45.132 01:07:32 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:45.132 01:07:32 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:30:45.132 01:07:32 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:45.132 01:07:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:45.132 01:07:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:45.132 01:07:32 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:30:45.132 01:07:32 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:30:45.132 01:07:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:45.132 01:07:32 -- common/autotest_common.sh@10 -- # set +x 00:30:45.132 01:07:32 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:30:45.132 01:07:32 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:30:45.132 01:07:32 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:30:45.132 01:07:32 -- common/autotest_common.sh@10 -- # set +x 00:30:51.694 INFO: APP EXITING 00:30:51.694 INFO: killing all VMs 00:30:51.694 INFO: killing vhost app 00:30:51.694 INFO: EXIT DONE 00:30:53.071 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:30:53.071 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:30:53.071 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:30:53.071 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:30:53.071 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:30:53.071 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:30:53.071 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:30:53.071 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:30:53.071 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:30:53.071 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:30:53.071 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:30:53.071 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:30:53.071 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:30:53.071 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:30:53.071 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:30:53.071 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:30:53.071 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:30:53.071 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:30:55.601 Cleaning 00:30:55.601 Removing: /var/run/dpdk/spdk0/config 00:30:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:55.860 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:55.860 Removing: /var/run/dpdk/spdk1/config 00:30:55.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:55.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:55.860 
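The "Already using the ... driver" lines above are the cleanup's report that each PCI device is still bound to its kernel driver (idxd or nvme). For checking a single binding by hand, the standard sysfs symlink is enough; this is an illustration, not the exact setup.sh code:

bdf=0000:c9:00.0                                             # one of the NVMe devices listed above
basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"   # prints: nvme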
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:55.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:55.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:55.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:55.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:55.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:55.860 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:55.860 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:55.860 Removing: /var/run/dpdk/spdk2/config 00:30:55.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:55.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:55.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:55.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:55.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:55.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:55.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:55.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:55.860 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:55.860 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:55.860 Removing: /var/run/dpdk/spdk3/config 00:30:55.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:55.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:55.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:55.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:55.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:55.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:55.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:55.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:55.860 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:55.860 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:55.860 Removing: /var/run/dpdk/spdk4/config 00:30:55.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:55.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:55.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:55.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:55.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:55.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:55.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:55.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:55.860 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:55.860 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:55.860 Removing: /dev/shm/nvmf_trace.0 00:30:55.860 Removing: /dev/shm/spdk_tgt_trace.pid3275572 00:30:55.860 Removing: /var/run/dpdk/spdk0 00:30:55.860 Removing: /var/run/dpdk/spdk1 00:30:55.860 Removing: /var/run/dpdk/spdk2 00:30:55.860 Removing: /var/run/dpdk/spdk3 00:30:55.860 Removing: /var/run/dpdk/spdk4 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3273360 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3275572 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3276342 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3277549 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3277859 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3279097 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3279164 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3279803 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3281119 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3281962 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3282477 00:30:55.860 
Removing: /var/run/dpdk/spdk_pid3282927 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3283486 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3283850 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3284165 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3284483 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3284830 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3285771 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3289022 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3289575 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3289911 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3289963 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3290840 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3291090 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3291782 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3292071 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3292403 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3292670 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3293021 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3293043 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3293884 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3294220 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3294665 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3296806 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3298367 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3300714 00:30:55.860 Removing: /var/run/dpdk/spdk_pid3302642 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3304590 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3306384 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3308205 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3310245 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3312063 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3314013 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3315929 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3317727 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3319798 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3321602 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3323392 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3325465 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3327261 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3329127 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3331131 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3332929 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3335161 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3337361 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3339436 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3341279 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3343459 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3347755 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3400510 00:30:56.119 Removing: /var/run/dpdk/spdk_pid3405505 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3416889 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3422901 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3427390 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3428099 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3439486 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3439808 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3444607 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3451756 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3454770 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3466680 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3476917 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3479030 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3480209 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3499798 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3504620 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3510217 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3512032 00:30:56.120 
Removing: /var/run/dpdk/spdk_pid3514364 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3514561 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3514778 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3515019 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3515821 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3518018 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3519272 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3519901 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3522588 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3523427 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3524168 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3528971 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3535539 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3540344 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3549072 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3549131 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3554178 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3554472 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3554769 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3555362 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3555368 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3560822 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3561572 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3566657 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3569912 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3576185 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3582333 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3591746 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3600259 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3600266 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3621250 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3623114 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3625152 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3626976 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3630262 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3630959 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3631765 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3632526 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3633894 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3634509 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3635334 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3636006 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3637482 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3644844 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3644850 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3650533 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3653044 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3655551 00:30:56.120 Removing: /var/run/dpdk/spdk_pid3656894 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3660018 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3661634 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3671745 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3672333 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3672926 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3675952 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3676556 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3677156 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3681746 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3681868 00:30:56.379 Removing: /var/run/dpdk/spdk_pid3683494 00:30:56.379 Clean 00:30:56.379 01:07:43 -- common/autotest_common.sh@1447 -- # return 0 00:30:56.379 01:07:43 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:30:56.379 01:07:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.379 01:07:43 -- common/autotest_common.sh@10 -- # set +x 00:30:56.379 01:07:43 -- spdk/autotest.sh@382 -- # timing_exit autotest 
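The coverage post-processing recorded just below boils down to three lcov passes: capture the post-test counters, merge them with the pre-test baseline, then strip DPDK, system, and example/app paths from the merged report. A condensed sketch, with the repeated --rc branch/function flags abbreviated to $rc and only one filter pattern shown:

spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
out=$spdk/../output
rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

lcov $rc --no-external -q -c -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"      # capture
lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge
lcov $rc -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"              # filter
# (the filter step is repeated for '/usr/*', '*/examples/vmd/*', '*/app/spdk_lspci/*', '*/app/spdk_top/*')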
00:30:56.379 01:07:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.379 01:07:43 -- common/autotest_common.sh@10 -- # set +x 00:30:56.380 01:07:43 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:30:56.380 01:07:43 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]] 00:30:56.380 01:07:43 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log 00:30:56.380 01:07:43 -- spdk/autotest.sh@387 -- # hash lcov 00:30:56.380 01:07:43 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:56.380 01:07:43 -- spdk/autotest.sh@389 -- # hostname 00:30:56.380 01:07:43 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-03 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info 00:30:56.638 geninfo: WARNING: invalid characters removed from testname! 00:31:18.585 01:08:02 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:31:18.585 01:08:04 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:31:19.151 01:08:06 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:31:20.523 01:08:07 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:31:21.894 01:08:08 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:31:22.826 01:08:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:31:24.295 01:08:11 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:24.295 01:08:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:31:24.295 01:08:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:24.295 01:08:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.295 01:08:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.295 01:08:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.295 01:08:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.295 01:08:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.295 01:08:11 -- paths/export.sh@5 -- $ export PATH 00:31:24.295 01:08:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.295 01:08:11 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:31:24.295 01:08:11 -- common/autobuild_common.sh@437 -- $ date +%s 00:31:24.295 01:08:11 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715728091.XXXXXX 00:31:24.295 01:08:11 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715728091.nsXyue 00:31:24.295 01:08:11 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:31:24.295 01:08:11 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:31:24.295 01:08:11 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:31:24.295 01:08:11 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:24.295 01:08:11 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp 
--status-bugs' 00:31:24.295 01:08:11 -- common/autobuild_common.sh@453 -- $ get_config_params 00:31:24.295 01:08:11 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:31:24.295 01:08:11 -- common/autotest_common.sh@10 -- $ set +x 00:31:24.295 01:08:11 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:31:24.295 01:08:11 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:31:24.295 01:08:11 -- pm/common@17 -- $ local monitor 00:31:24.295 01:08:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:24.295 01:08:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:24.295 01:08:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:24.295 01:08:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:24.295 01:08:11 -- pm/common@21 -- $ date +%s 00:31:24.295 01:08:11 -- pm/common@25 -- $ sleep 1 00:31:24.295 01:08:11 -- pm/common@21 -- $ date +%s 00:31:24.295 01:08:11 -- pm/common@21 -- $ date +%s 00:31:24.295 01:08:11 -- pm/common@21 -- $ date +%s 00:31:24.295 01:08:11 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715728091 00:31:24.295 01:08:11 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715728091 00:31:24.295 01:08:11 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715728091 00:31:24.295 01:08:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715728091 00:31:24.295 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715728091_collect-vmstat.pm.log 00:31:24.295 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715728091_collect-cpu-load.pm.log 00:31:24.295 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715728091_collect-cpu-temp.pm.log 00:31:24.295 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715728091_collect-bmc-pm.bmc.pm.log 00:31:25.231 01:08:12 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:31:25.231 01:08:12 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:31:25.231 01:08:12 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:31:25.231 01:08:12 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:25.231 01:08:12 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:25.231 01:08:12 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:25.231 01:08:12 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:25.231 01:08:12 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:25.231 01:08:12 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname 
00:31:25.231 01:08:12 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:25.231 01:08:12 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:25.231 01:08:12 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:25.231 01:08:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:25.231 01:08:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:25.231 01:08:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:25.231 01:08:12 -- pm/common@44 -- $ pid=3694302
00:31:25.231 01:08:12 -- pm/common@50 -- $ kill -TERM 3694302
00:31:25.231 01:08:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:25.231 01:08:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:25.231 01:08:12 -- pm/common@44 -- $ pid=3694303
00:31:25.231 01:08:12 -- pm/common@50 -- $ kill -TERM 3694303
00:31:25.231 01:08:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:25.231 01:08:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:25.231 01:08:12 -- pm/common@44 -- $ pid=3694305
00:31:25.231 01:08:12 -- pm/common@50 -- $ kill -TERM 3694305
00:31:25.231 01:08:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:25.231 01:08:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:25.231 01:08:12 -- pm/common@44 -- $ pid=3694336
00:31:25.231 01:08:12 -- pm/common@50 -- $ sudo -E kill -TERM 3694336
00:31:25.491 + [[ -n 3162387 ]]
00:31:25.491 + sudo kill 3162387
00:31:25.500 [Pipeline] }
00:31:25.520 [Pipeline] // stage
00:31:25.532 [Pipeline] }
00:31:25.550 [Pipeline] // timeout
00:31:25.554 [Pipeline] }
00:31:25.566 [Pipeline] // catchError
00:31:25.570 [Pipeline] }
00:31:25.585 [Pipeline] // wrap
00:31:25.590 [Pipeline] }
00:31:25.605 [Pipeline] // catchError
00:31:25.612 [Pipeline] stage
00:31:25.614 [Pipeline] { (Epilogue)
00:31:25.627 [Pipeline] catchError
00:31:25.629 [Pipeline] {
00:31:25.642 [Pipeline] echo
00:31:25.644 Cleanup processes
00:31:25.649 [Pipeline] sh
00:31:25.932 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:31:25.932 3694809 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:31:25.944 [Pipeline] sh
00:31:26.226 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:31:26.226 ++ grep -v 'sudo pgrep'
00:31:26.226 ++ awk '{print $1}'
00:31:26.226 + sudo kill -9
00:31:26.226 + true
00:31:26.237 [Pipeline] sh
00:31:26.522 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:36.504 [Pipeline] sh
00:31:36.788 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:36.788 Artifacts sizes are good
00:31:36.800 [Pipeline] archiveArtifacts
00:31:36.807 Archiving artifacts
00:31:36.988 [Pipeline] sh
00:31:37.275 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest
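The "Cleanup processes" stage above is a belt-and-braces sweep: it lists anything still running under the workspace, drops the pgrep invocation itself from the list, and force-kills whatever remains. In this run the list was empty, so kill -9 received no arguments and the trailing true kept the step green. A rough reconstruction of that pipeline follows; the "|| true" fallback is inferred from the "+ true" entry rather than copied from the pipeline script.

# Hypothetical reconstruction of the cleanup step shown in the log above.
ws=/var/jenkins/workspace/dsa-phy-autotest/spdk
pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
# Force-kill any leftovers; ignore the error kill raises when $pids is empty.
sudo kill -9 $pids || true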
00:31:37.287 [Pipeline] cleanWs
00:31:37.296 [WS-CLEANUP] Deleting project workspace...
00:31:37.296 [WS-CLEANUP] Deferred wipeout is used...
00:31:37.302 [WS-CLEANUP] done
00:31:37.303 [Pipeline] }
00:31:37.325 [Pipeline] // catchError
00:31:37.335 [Pipeline] sh
00:31:37.613 + logger -p user.info -t JENKINS-CI
00:31:37.622 [Pipeline] }
00:31:37.637 [Pipeline] // stage
00:31:37.642 [Pipeline] }
00:31:37.660 [Pipeline] // node
00:31:37.665 [Pipeline] End of Pipeline
00:31:37.696 Finished: SUCCESS